Dispersion and Rotation Measure of Supernova Remnants and Magnetized Stellar Winds: Application to Fast Radio Bursts

Recent studies of fast radio bursts (FRBs) have led to many theories associating them with young neutron stars. If this is the case, then the presence of supernova ejecta and stellar winds provides a changing dispersion measure (DM) and rotation measure (RM) that can potentially probe the environments of FRB progenitors. Here we summarize the scalings for the DM and RM in the cases of a constant density ambient medium and of a progenitor stellar wind. Since the amount of ionized material is controlled by the dynamics of the reverse shock, we find the DM changes more slowly than in previous simpler work, which assumed a constant ionization fraction. Furthermore, the DM can be constant or even increasing as the supernova remnant sweeps up material, arguing that a young neutron star hypothesis for FRBs is not ruled out if the DM is not decreasing over repeated bursts. The combined DM and RM measurements for the repeating FRB 121102 are consistent with supernova ejecta with an age of $\sim10^2-10^3\,{\rm yrs}$ expanding into a high density ($\sim100\,{\rm cm^{-3}}$) interstellar medium. This naturally explains its relatively constant DM over many years as well. Other FRBs with much lower RMs may indicate that they are especially young supernovae in wind environments or that their DMs are largely from the intergalactic medium. We therefore caution against inferring magnetic fields simply by dividing an RM by a DM, because these quantities could originate from distinct regions along the path an FRB propagates.

INTRODUCTION

Fast radio bursts (FRBs) are a class of transients characterized by millisecond flashes of radio radiation (Lorimer et al. 2007; Keane et al. 2012; Thornton et al. 2013; Ravi et al. 2015). Their large dispersion measures (DMs) and Faraday rotation measures (RMs) imply that they likely occur at cosmological distances and/or in extreme density environments (see discussions by Kulkarni et al. 2014; Luan & Goldreich 2014; Lyubarsky 2014; Katz 2016a, and references therein). An important constraint on their origin is that they appear to be very common, with an inferred rate of $\sim10^3-10^4$ FRBs on the sky per day (e.g., Rane et al. 2016; Vander Wiel et al. 2016; Bhandari et al. 2018). Nevertheless, no astrophysical objects have been definitively connected to FRBs, leaving their DMs and RMs as vital probes of their mechanisms, progenitors, and environments. There are multiple possible contributions to the DM and RM of an FRB. These include the disk of the Milky Way (Oppermann et al. 2012; Yao et al. 2017), the Milky Way's halo (Dolag et al. 2015), the intervening intergalactic medium (McQuinn 2014; Akahori et al. 2016), the corresponding disk and halo of an FRB's host galaxy (Xu & Han 2015; Tendulkar et al. 2017), and the FRB's immediate local environment (Connor et al. 2016; Lyutikov et al. 2016; Piro 2016; Michilli et al. 2018). There are a variety of arguments that FRBs are produced by young neutron stars (Popov & Postnov 2010; Waxman 2017; Nicholl et al. 2017). Since neutron stars are formed in core-collapse supernova (SN) explosions, FRB signals should thus pass through the expanding shell of a young supernova remnant (SNR), which should make a corresponding contribution to the FRB's DM and RM.
An important conclusion emphasized by Piro (2016) is that even though the SNR may not dominate the total DM or RM, it should dominate the change in the DM or RM seen with time, as might be discernible over a time scale of several years. In this way, the environment and ultimately the source of the FRB may be better understood for a repeating FRB (Piro & Burke-Spolaor 2017). These contributions should show a secular decrease at early times as the SNR expands ballistically (Piro 2016; Katz 2016b; Murase et al. 2016; Metzger et al. 2017), but the DM from the SNR should be constant or even increasing with time once the SNR has swept up an amount of material similar to the ejecta mass (Piro 2016). Although Piro (2016) and subsequent studies provide the most complete description of the SNR impact thus far, important details still remain to be explored. First, the dynamics of the reverse shock is critical for understanding the amount and geometry of the ionized material that can disperse the FRB. Although this was included by Piro (2016), the difference this introduces to the scalings with time was not sufficiently highlighted, nor was it included in subsequent works (which typically assume a constant ionized fraction). Another important issue is that core-collapse progenitors are massive stars that will have strong, magnetized winds (Ignace et al. 1998; ud-Doula & Owocki 2002). As this wind is swept up by the expanding SNR (Chevalier 1982; Chevalier & Fransson 2003; Harvey-Smith et al. 2010), it can be an important additional contribution to the DM and RM of an FRB. Furthermore, the decreasing density profile with radius of a wind can impact the dynamics of an SNR differently than the constant density ISM used by Piro (2016).

[Fig. 1 caption: The key radii are at the positions of the reverse shock $R_r$ (white dot-dashed line), contact discontinuity $R_c$ (black solid line), and the forward shock or blast wave $R_b$ (black dashed line). The main ionized regions, which can contribute to the DM and RM of an FRB, sit between $R_r$ and $R_b$. These are composed of (1) the shocked SN ejecta (between the radii of $R_r$ and $R_c$) and (2) the shocked ISM (between the radii of $R_c$ and $R_b$).]

Motivated by these issues, we investigate in further detail the DM and RM seen for an FRB and their time evolution due to an SNR and its environment. In Section 2, we consider the contributions of the SNR and a constant density interstellar medium (ISM), from the blast wave through Sedov-Taylor phases of evolution. In Section 3, we instead consider a magnetized stellar wind environment and highlight the distinct DM and RM evolution. We discuss the implications of these results for observations of FRBs in Section 4, and conclude with a summary of our work in Section 5.

CONSTANT DENSITY ISM

We first describe the evolution of an SN expanding into a constant density ISM. The mass distribution can be roughly divided into four regions that are summarized in Figure 1. These are, in order of increasing radius: (1) neutral, recombined SN ejecta, (2) shocked SN ejecta, (3) shocked ISM material, and (4) unshocked ISM. These are separated by three key radii: (1) the reverse shock, at radius $R_r$, (2) the contact discontinuity between the SN ejecta and ISM, at $R_c$, and (3) the forward shock or blastwave radius, at $R_b$. To understand FRBs propagating through this material from an embedded central neutron star, we focus on the two shocked, ionized regions that provide sufficient free electrons to significantly disperse the FRB signal (the region between $R_r$ and $R_b$ in Figure 1).
In particular, in this work we make a better distinction between $R_c$ and $R_b$ in comparison to Piro (2016). An additional source of ionized material comes from the pulsar wind nebula located near the center of the SNR. Even though the amount of ionizing emission can be especially strong in the case of a highly magnetized neutron star (Metzger et al. 2017), it is still a small contribution in comparison to the outer shocked material, and so we save a detailed study of this for future work. As the SN ejecta expand, they roughly evolve through two stages. This is summarized by the approximate analytic functions provided in Table 1 (from the work of McKee & Truelove 1995; see also the plotting of these functions in Figure A.1). First, the ejecta will be in an "ejecta-dominated phase," for which the blastwave radius $R_b$ is moving at roughly constant velocity, independent of the density of material surrounding the SN. This continues up until the time when the SN has swept up an amount of material approximately equal to the mass of the ejecta. This occurs on the Sedov-Taylor timescale, which scales as $t_{\rm ST} \propto E^{-1/2} M^{5/6} n_0^{-1/3}$, where $E = 10^{51}E_{51}\,{\rm erg}$ is the energy of the explosion, $M = M_1\,M_\odot$ is the mass of the SN ejecta, and $n_0$ (in units of ${\rm cm^{-3}}$) is the number density of a uniform ambient ISM. Associated with this are a characteristic length scale $R_{\rm ST}$ and velocity $v_{\rm ST}$. In the second stage, after a time $t_{\rm ST}$, the expansion of the ejecta slows, as summarized in the right column of Table 1. The velocities in Table 1 refer to the velocities of the forward and reverse shocks. In particular, $\tilde v_r$ is in the rest frame of the unshocked ejecta just ahead of it, $\tilde v_r \equiv R_r/t - dR_r/dt$, rather than the rest frame reverse shock velocity $v_r \equiv dR_r/dt$. This is because it is the former quantity that is most relevant for estimating properties of the ejecta, such as the shock temperature and pressure. The contact discontinuity $R_c$ is estimated from the mass conservation condition, where we have used the compressibility of a strong shock with $\gamma = 5/3$, $(\gamma+1)/(\gamma-1) = 4$. This then gives $R_c \approx (3/4)^{1/3} R_b$, as we use in Table 1 for both $t < t_{\rm ST}$ and $t > t_{\rm ST}$. Such a relation is most accurate at early times, but gets increasingly poor at later times when the density is not constant across the reverse shocked region (for example, see the study by Tang & Chevalier 2017). At least for the work here, this is a sufficient approximation, and we save a more detailed numerical treatment for future investigations. The evolution of the SNR is summarized with the fiducial values of $M = 1\,M_\odot$, $E = 10^{51}\,{\rm erg}$, and $n_0 = 1\,{\rm cm^{-3}}$ in Figure 2. This shows how the SN blastwave radius roughly evolves from the ejecta-dominated to Sedov-Taylor stages. The analytic expressions given in Table 1 allow us to follow the smooth evolution of the SNR between these limits. Figure 2 also shows how narrow the ionized regions are in radius, especially during the early phases. Following the Sedov-Taylor stage, there is the "snowplow stage," when the SNR begins to radiatively cool appreciably. We do not consider this stage in detail in this work, and thus our solutions and discussions are only applicable up until this timescale.
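To make the two-stage evolution concrete, the following is a minimal Python sketch (our own, not the Table 1 interpolating functions): it uses an order-of-magnitude sweep-up definition of $t_{\rm ST}$, the simple limiting power laws for $R_b$, and an assumed ambient mean molecular weight $\mu = 1.4$.

```python
# Minimal sketch: two-stage blastwave evolution with t_ST defined by
# sweep-up of the ejecta mass (order of magnitude only).
import numpy as np

M_SUN, M_P, PC, YR = 1.989e33, 1.673e-24, 3.086e18, 3.156e7
MU = 1.4  # assumed mean molecular weight of the ambient ISM

def snr_radii(t_yr, E=1e51, M=1.0 * M_SUN, n0=1.0):
    """Return (R_c [pc], R_b [pc], t_ST [yr]) for a time t in years."""
    rho0 = MU * M_P * n0
    v_e = np.sqrt(10.0 * E / (3.0 * M))                     # max ejecta velocity
    R_ST = (3.0 * M / (4.0 * np.pi * rho0)) ** (1.0 / 3.0)  # sweep-up radius
    t_ST = R_ST / v_e                                       # order-of-magnitude t_ST
    t = t_yr * YR
    if t < t_ST:                                            # ejecta-dominated: R_b ~ v_e t
        R_b = v_e * t
    else:                                                   # Sedov-Taylor: R_b ~ t^(2/5)
        R_b = R_ST * (t / t_ST) ** 0.4
    R_c = (3.0 / 4.0) ** (1.0 / 3.0) * R_b                  # contact discontinuity
    return R_c / PC, R_b / PC, t_ST / YR

print(snr_radii(1e3))  # ~ (3.7 pc, 4.1 pc, 140 yr) for the fiducial values
```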
Constant Density: Dispersion Measure

For an FRB at redshift $z$, and assuming that the Milky Way component can be subtracted out, the remaining total DM is the sum of ${\rm DM_{host}}$, ${\rm DM_{IGM}}$, and ${\rm DM_{local}}$ (with appropriate redshift factors applied to the host-frame terms), where ${\rm DM_{host}}$ is the contribution from the FRB host galaxy, ${\rm DM_{IGM}}$ is the contribution from the intervening intergalactic medium (IGM), and ${\rm DM_{local}} = {\rm DM_{SNR}} + {\rm DM_{ISM}}$ is the local contribution from the shocked SN material and shocked ISM, respectively. In Section 3, we consider a wind profile for the material around the SN instead, which also adds a contribution ${\rm DM_w}$ to ${\rm DM_{local}}$. The IGM component can be approximated as ${\rm DM_{IGM}} \approx c\,n_{\rm IGM}\,z/H_0$ (Katz 2016b), where $H_0$ is Hubble's constant and $n_{\rm IGM}$ is the present-day density of the IGM ($n_{\rm IGM} = 1.6\times10^{-7}\,{\rm cm^{-3}}$, assuming that the baryons are homogeneously distributed and ionized). A more detailed expression for this term is provided by Deng & Zhang (2014). For determining the DM that may be imprinted on an FRB by the SNR, we must consider each of the regions and the different stages of the evolution. For the SN ejecta, only the region from $R_r$ out to $R_c$ is ionized. Thus, integrating through the ionized material, the dispersion measure of the SNR is given by ${\rm DM_{SNR}} = n_r(R_c - R_r)$ (10), where $n_r$ is the number density of electrons behind the reverse shock. This density is somewhat higher than the average density of the remnant, and can be determined by assuming pressure continuity across the contact discontinuity, which gives a reverse shock mass density of $\rho_r \approx \mu m_p n_0 (v_b/\tilde v_r)^2$ (11), where $\mu$ is the mean molecular weight. The actual electron number density in the reverse shock region is $n_r = \rho_r/\mu_e m_p$ (12), where $\mu_e$ is the mean molecular weight per electron. For the ISM contribution to the DM, assuming that this is mostly hydrogen dominated, it is ${\rm DM_{ISM}} = 4 n_0 (R_b - R_c) + f n_0 (R_{\rm ISM} - R_b)$ (13), where the factor of 4 is from compression of material at the forward shock. The term on the far righthand side corresponds to a possible contribution from ionized ISM material surrounding the SNR, where $R_{\rm ISM}$ is the extent of this region and $f$ is the ionized fraction. For the most part, we ignore this contribution when presenting ${\rm DM_{local}}$, but we do discuss it further below since it may be important for the time-changing DM. Using Equations (10) and (13) with the expressions for $R_r$, $R_c$, and $R_b$ from Table 1, we plot the full DM evolution in the bottom panel of Figure 2. This demonstrates that the DM is dominated by the SNR at early times ($t < t_{\rm ST}$) and then dominated by the ISM at late times ($t > t_{\rm ST}$). Furthermore, at intermediate times ($t \sim t_{\rm ST}$), the local DM contribution is actually rather constant. Also, note the power-law behavior at early and late times. At early times, ${\rm DM_{local}} \propto t^{-1/2}$, which is different than the scaling of ${\rm DM} \propto t^{-2}$ found from simple analytic arguments that assume a constant mass fraction of ionized material (e.g., Connor et al. 2016; Piro & Burke-Spolaor 2017; Metzger et al. 2017). A wider range of DM solutions are summarized in Figure 3, where we consider a variety of $n_0$ values as well as $M = 10\,M_\odot$ and $2\,M_\odot$ (note that we keep $E$ fixed at $10^{51}\,{\rm erg}$ for all these calculations). These masses are meant to represent SNe from a red supergiant or a stripped-envelope progenitor (e.g., Type Ib/c), respectively. These solutions highlight the fundamental role played by the timescale $t_{\rm ST}$, which is approximately the time at which $d{\rm DM}/dt$ switches from negative to positive. For times with $t \sim t_{\rm ST}$, the DM can be relatively constant for hundreds of years if not more. Thus, even if the DM of an FRB is not changing with time, this does not disprove the hypothesis of a rather young SN as the FRB progenitor.
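As a bookkeeping aid, here is a short sketch of how these local DM pieces combine; the function signature is ours, and since the radii are in pc and the densities in ${\rm cm^{-3}}$, the result is directly in the conventional ${\rm pc\,cm^{-3}}$ units.

```python
# Sketch of the local DM pieces of Equations (10) and (13); radii in pc,
# densities in cm^-3, so products are already in pc cm^-3.
def dm_local(R_r, R_c, R_b, n_r, n0, f=0.0, R_ism=0.0):
    dm_snr = n_r * (R_c - R_r)                # shocked ejecta, R_r -> R_c
    dm_ism = 4.0 * n0 * (R_b - R_c)           # shocked ISM (factor-4 compression)
    dm_ion = f * n0 * max(R_ism - R_b, 0.0)   # optional photoionized ISM term
    return dm_snr + dm_ism + dm_ion

print(dm_local(R_r=3.0, R_c=3.7, R_b=4.1, n_r=300.0, n0=1.0))  # pc cm^-3
```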
[Table 1 notes: (a) $\tilde v_r$ is not the rest frame velocity of the reverse shock but rather the velocity in the frame of the unshocked ejecta just ahead of it, $\tilde v_r \equiv R_r/t - dR_r/dt$. (b) The expressions for DM and RM are in the extreme limits of $t \ll t_{\rm ST}$ and $t \gg t_{\rm ST}$; for the more detailed evolution, one should consult Figures 3 and 5. (c) Upper limit, since the parallel magnetic field could be smaller than the field assumed for this estimate.]

At late times, the DM only depends on $n_0$, but interestingly, at early times it depends on both $M$ and $n_0$. This is different from simpler estimates of the ballistic phase, which might assume that only $M$ is important at early times. To better understand these simple scalings, and to provide useful formulae for comparison to future observations, in the following sections we consider the behavior of the DM in the limits of early and late times.

Constant Density: Ejecta-Dominated Stage DM Estimate

Taking the limit $t \ll t_{\rm ST}$, and using the expressions given in Table 1, the thickness of the reverse shocked region grows roughly as $R_c - R_r \propto t^{5/2}$ (14). Combining Equations (11) and (12) then results in a reverse shock density $n_r \propto t^{-3}$. This demonstrates that the density is going down like $t^{-3}$, as one might assume for material expanding with constant velocity. Furthermore, since $t_{\rm ST} \propto n_0^{-1/3}$, this density is in fact independent of $n_0$, as one would expect during the ejecta-dominated phase. Using Equation (10), the dispersion measure of the SNR is then ${\rm DM_{SNR}} = n_r(R_c - R_r) \propto t^{-1/2}$. As mentioned above, this scaling $\propto t^{-1/2}$ is very different from that found from previous simpler estimates that use $\propto t^{-2}$. The main difference is that those works assumed a constant fraction of material ionized. Instead, the ionized radial extent should scale with $R_c - R_r$, which is growing with time much faster than linearly, as shown in Equation (14). Another important difference is that this DM now includes a dependence on $n_0$ (as was seen in Figure 3). This is because the larger $n_0$ is, the more strongly the reverse shock is driven back into the ejecta to ionize the material. Again, like ${\rm DM_{SNR}}$, we derive a shallower scaling with $t$ in comparison to other estimates in the literature due to a more realistic description of the ionized extent. The other main region of free electrons is the shocked ISM material that is swept up within the region between the blastwave radius and the contact discontinuity, $R_b - R_c \propto t$ at these early times. Using Equation (13), this contribution is ${\rm DM_{ISM}} \approx 4 n_0 (R_b - R_c) \propto t$, where we take $f = 0$. For a neutral ambient medium (i.e., $f = 0$), this DM is actually increasing with time as more and more ISM material is swept up. Nevertheless, the overall contribution is orders of magnitude smaller than the SN contribution (as seen on the lefthand side of Figure 2) and is not expected to be seen directly at early times.

Constant Density: Sedov-Taylor Stage DM Estimate

Next, in the limit $t \gg t_{\rm ST}$, the radial extent of the swept up ISM instead scales as $R_b - R_c \propto t^{2/5}$. The corresponding DM is ${\rm DM_{ISM}} \propto n_0^{4/5}\,t_{1000\,{\rm yr}}^{2/5}$, where $t_{1000\,{\rm yr}} = t/1000\,{\rm yr}$, and in the last expression we assume $f = 0$. This is the same scaling as presented in earlier work, with a similar prefactor within $\approx 15\%$ of that result. Taking the derivative, we find $d{\rm DM_{ISM}}/dt \propto [4(1-(3/4)^{1/3}) - f]\,n_0\,dR_b/dt$, where here we have included a factor $-f n_0\,dR_b/dt$ due to ionized ISM material being swept up by the forward shock. This shows that an increasing DM is possible for $f \lesssim 0.4$.
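Since the sign of $d{\rm DM_{ISM}}/dt$ is set by simple arithmetic on the bracket above, a one-line check (our own, under the $R_c = (3/4)^{1/3}R_b$ approximation used throughout) reproduces the quoted threshold:

```python
# Check of the f <~ 0.4 threshold: with DM_ISM = 4 n0 (R_b - R_c)
# + f n0 (R_ISM - R_b) and R_c = (3/4)^(1/3) R_b, the time derivative is
# proportional to 4 (1 - (3/4)^(1/3)) - f.
f_crit = 4.0 * (1.0 - (3.0 / 4.0) ** (1.0 / 3.0))
print(f"DM_ISM increases with time for f < {f_crit:.2f}")  # ~ 0.37, i.e. f <~ 0.4
```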
Constant Density: Rotation Measure

Shocks driven during the expansion of the SNR can generate magnetic fields that may imprint themselves on an FRB through Faraday rotation. Following Piro (2016), we consider the magnetic fields generated by the forward and reverse shocks, assuming that the magnetic fields roughly obey equipartition with the shock velocities. For the reverse shock, the magnetic field is then $B_r \approx (8\pi \epsilon_B \rho_r \tilde v_r^2)^{1/2}$ (23), where $\epsilon_B$ is a parameter that sets how much of the shock energy goes into the magnetic field. Assuming equipartition between the forward shock and the magnetic field generated in the ISM, the corresponding field strength is $B_b \approx (8\pi \epsilon_B \rho_{\rm ISM} v_b^2)^{1/2}$ (24). The velocities and corresponding magnetic fields are plotted in the upper panel of Figure 4 for $\epsilon_B = 0.1$, $M = M_\odot$, $E = 10^{51}\,{\rm erg}$, and $n_0 = 1\,{\rm cm^{-3}}$. This shows the general trend that the magnetic fields are rather constant at early times, but then decrease during the Sedov-Taylor phase. The associated rotation measure for ionized material of density $n$ with line-of-sight magnetic field component $B_\parallel$ over a path length $L$ is ${\rm RM} \approx 0.81\,(n/{\rm cm^{-3}})(B_\parallel/\mu{\rm G})(L/{\rm pc})\,{\rm rad\,m^{-2}}$. A useful relation between the RM and DM of the $i$-th region within the system is therefore ${\rm RM}_i \approx 0.81\,(B_{\parallel,i}/\mu{\rm G})\,{\rm DM}_i\,{\rm rad\,m^{-2}}$. This expression is used to plot the RM evolution in the bottom panel of Figure 4. Here we assume that ${\rm RM_{SNR}}$ and ${\rm RM_{ISM}}$ can simply be added together to get ${\rm RM_{local}}$. Just as for the DM evolution, the RM is dominated by the SNR at early times and the ISM at late times. The RM can be very large at early times, and the changes in RM can be quite substantial even if the changes in DM are small. Furthermore, while the DM can be decreasing, roughly constant, or increasing depending on the time, the RM is strictly decreasing in this scenario. The full set of solutions for red supergiant and stripped-envelope SNe are summarized in Figure 5. Just as for the DM, at late times the RM only depends on $n_0$, while early on it depends on both $M$ and $n_0$. Unlike the DM, we do not include the ionized ISM material (highlighted with the ionized fraction $f$), since it is not clear that this material should have an ordered magnetic field. In the following sections, we derive the analytic scalings for these dependencies at both early and late times.

Constant Density: Ejecta-Dominated Stage RM Estimate

From Equation (11), $\rho_r^{1/2}\tilde v_r$ is constant in time, since $\rho_r \propto t^{-3}$ while $\tilde v_r \propto t^{3/2}$. Combining this with $v_b \approx 1.37 v_{\rm ST}$ in the limit $t \ll t_{\rm ST}$, and substituting into Equation (23), the magnetic field is found to be roughly constant with time, where $\epsilon_{-1} = \epsilon_B/0.1$. The associated rotation measure is then ${\rm RM_{SNR}} \propto B_r\,{\rm DM_{SNR}} \propto t^{-1/2}$. This provides the $\propto t^{-1/2}$ scaling seen in the full solutions in Figures 4 and 5. Furthermore, we see directly that the RM depends on both $n_0$ and $M$.

Constant Density: Sedov-Taylor Stage RM Estimate

Using $v_b$ from Table 1 for $t \gg t_{\rm ST}$ with Equation (24), the forward shock field declines as $B_b \propto v_b \propto t^{-3/5}$. The corresponding RM is then ${\rm RM_{ISM}} \propto B_b\,{\rm DM_{ISM}} \propto t^{-1/5}$. The RM is indeed decreasing more shallowly than at early times and no longer depends on $M$.
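The per-region RM-DM relation above is easy to apply directly; a minimal sketch (our own helper, with illustrative field strengths) is:

```python
# Sketch of the RM-DM relation quoted above: RM_i ~ 0.81 B_parallel,i DM_i
# (B in microgauss, DM in pc cm^-3, RM in rad m^-2), summed over regions.
def rm_local(regions):
    """regions: iterable of (DM [pc cm^-3], B_parallel [uG]) pairs."""
    return sum(0.81 * b_par * dm for dm, b_par in regions)

# e.g. a strongly magnetized SNR region plus a weakly magnetized ISM shell
print(rm_local([(100.0, 50.0), (5.0, 1.0)]))  # ~ 4054 rad m^-2
```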
WIND ENVIRONMENT

While the previous discussion assumes a constant density ISM surrounding the SN, in many cases the circumstellar environment will be shaped by a wind from the massive progenitor. This is likely especially important for FRBs if they come from young neutron stars (Connor et al. 2016; Piro 2016). A wind can significantly alter the DM evolution, and it also provides another source of magnetic field through the magnetized wind. For a constant mass loss rate $\dot M$, we consider a constant velocity wind with density profile $\rho_w(r) = \dot M/4\pi v_w r^2 \equiv K/r^2$, where $K = \dot M/4\pi v_w$ and $v_w$ is the velocity of the wind. The wind mass loading parameter has a typical value of $K \approx 5\times10^{13}\,\dot M_{-5}\,v_6^{-1}\,{\rm g\,cm^{-1}}$, where $\dot M_{-5} = \dot M/10^{-5}\,M_\odot\,{\rm yr^{-1}}$ and $v_6 = v_w/10^6\,{\rm cm\,s^{-1}}$. Throughout our analysis we focus on varying $K$ rather than $\dot M$ and $v_w$ individually, since this is the primary parameter that determines the evolution. To better understand the SNR evolution under the influence of a wind environment, we derive a set of analytic equations for the characteristic radii in analogy to the constant ISM case. This derivation is provided in the Appendix, with a summary of the resulting analytic functions in Table 2. As with the constant density ISM case and the Sedov-Taylor scale, for the wind there is a characteristic radius and timescale that divide the ejecta-dominated and wind-dominated stages of the evolution. From the solutions in the Appendix, these are found to be given by Equations (A26) and (A27), which when written in physical units are $R_{\rm ch} \propto M K^{-1}$ and $t_{\rm ch} = 1.9\times10^3\,E_{51}^{-1/2} M_1^{3/2} K_{13}^{-1}\,{\rm yr}$, where $K_{13} = K/10^{13}\,{\rm g\,cm^{-1}}$.

[Fig. 6 caption: Sample evolution of an SNR and the resulting DM for fiducial values $M = 1\,M_\odot$ and $E = 10^{51}\,{\rm erg}$, expanding into a steady wind with $K = 10^{13}\,{\rm g\,cm^{-1}}$; this combination corresponds to $t_{\rm ch} = 1.9\times10^3\,{\rm yr}$. The top panel shows the evolution of the three key radii $R_r$ (red long-dashed line), $R_c$ (black solid line), and $R_b$ (blue short-dashed line). The red and blue shaded regions denote the shocked SN ejecta and wind, respectively. The bottom panel shows how the DM evolves and is generally dominated by the SN ejecta, although if this were followed until even later times the wind would begin to contribute more.]

The general evolution of the SNR in the wind case is summarized in the upper panel of Figure 6. This shows that in this case the blastwave evolves as $R_b \propto (Et^2/K)^{1/3}$ at late times, which is steeper than in the constant ISM case. This is because the SNR is expanding into material whose density decreases with radius, and thus it is not inhibited as strongly. This also means that the timescale $t_{\rm ch}$ can tend to be fairly long in comparison to the Sedov-Taylor timescale. For example, if we ask at what radius the wind density is similar to the constant density ISM, i.e., $\rho_w/(\mu_e m_p) = n_0$, we find $r = [K/(\mu_e m_p n_0)]^{1/2} = 0.79\,\mu_e^{-1/2} K_{13}^{1/2} n_0^{-1/2}\,{\rm pc}$ (36), which is much less than $R_{\rm ch}$. This indicates that if $t \lesssim t_{\rm ch}$ is applicable to a given system, then the SNR is likely actually sitting within a bubble excavated by the wind.

Wind: Dispersion Measure

Similar to the constant density ISM case, we use pressure equality to solve for the electron density in the reverse shock region, $\rho_r \approx \rho_w(R_b)(v_b/\tilde v_r)^2$ (37), where $\rho_w(R_b) = K/R_b^2$ is the density just ahead of the forward shock. From this we can again solve for the DM of the SNR using Equation (10). We assume in most cases that the wind itself will also have a significant ionized component, either because the wind is intrinsically ionized or because the shock breakout (Matzner & McKee 1999) ionizes the wind. The wind's DM can then be broken into two components, the shocked and unshocked wind, which are determined according to ${\rm DM_{w,sh}} = 4\,[K/(\mu_e m_p R_b^2)]\,(R_b - R_c)$ (38) and ${\rm DM_{w,unsh}} = \int_{R_b}^\infty [K/(\mu_e m_p r^2)]\,dr = K/(\mu_e m_p R_b)$ (39), respectively. The evolution of all the components ${\rm DM_{SNR}}$, ${\rm DM_{w,sh}}$, and ${\rm DM_{w,unsh}}$ is plotted in the bottom panel of Figure 6. Unlike the constant density case, here the DM is always strongly decreasing, because even in the wind-dominated stage the wind density gets smaller with radius. Over the timescales plotted here, generally ${\rm DM_{SNR}} \gg {\rm DM_{w,sh}}, {\rm DM_{w,unsh}}$, even for $t > t_{\rm ch}$. We note, though, that if this evolution were followed to even later times ($t \gtrsim 10^6\,{\rm yrs}$ for these specific parameters), the wind component would begin to dominate. Just as in the constant density case, we next solve for the DM in the limits of early and late times.
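A short sketch of these wind scales (assuming the fiducial $t_{\rm ch}$ normalization quoted above and Equation 36; the helper name and defaults are ours):

```python
# Sketch of the wind characteristic scales: K from Mdot and v_w, t_ch from
# the fiducial normalization in the text, and the wind-bubble radius of
# Eq. (36) where the wind density matches the ambient ISM density.
import numpy as np

M_SUN_YR = 1.989e33 / 3.156e7  # g/s per M_sun/yr

def wind_scales(Mdot_m5=1.0, v6=1.0, E51=1.0, M1=1.0, mu_e=1.2, n0=1.0):
    K = (1e-5 * Mdot_m5 * M_SUN_YR) / (4.0 * np.pi * v6 * 1e6)  # g cm^-1
    K13 = K / 1e13
    t_ch = 1.9e3 * E51**-0.5 * M1**1.5 / K13                    # yr
    r_eq = 0.79 * mu_e**-0.5 * K13**0.5 * n0**-0.5              # pc, Eq. (36)
    return K13, t_ch, r_eq

print(wind_scales())  # K13 ~ 5, t_ch ~ 380 yr, r_eq ~ 1.6 pc
```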
Wind: Ejecta-Dominated Stage DM Estimate

Taking the limit $t \ll t_{\rm ch}$, the thickness of the region heated by the reverse shock is $R_c - R_r = 1.11\,(t/t_{\rm ch})^{3/2} R_{\rm ch} = 2.3\times10^{-4}\,E_{51}^{3/4} M_1^{-5/4} K_{13}^{1/2} t_{\rm yr}^{3/2}\,{\rm pc}$ (40). This is generally larger at early times than in the constant density case, because the large density near the star more readily pushes the reverse shock back into the ejecta. It grows more slowly with time, however, $\propto t^{3/2}$ rather than the $\propto t^{5/2}$ growth of the constant density case. To estimate the density of the reverse shocked region, we use Equation (37) and approximate in the $t \ll t_{\rm ch}$ limit that $v_b \approx 1.78\,v_{\rm ch}$ and $\tilde v_r \approx 1.16\,(t/t_{\rm ch})^{1/2} v_{\rm ch}$ (from the relations in Table 2). This results in $n_r \propto t^{-3}$. Putting this together with the thickness of the shocked region provides ${\rm DM_{SNR}} \propto t^{-3/2}$. This is much larger than the constant density case because of the extremely large density of the wind in close proximity to the SN, which is more effective at driving the reverse shock. The DM then falls off more quickly with time than in the constant density case because of the decreasing density of the wind. As noted above, the wind has two contributions to the DM, from the shocked and unshocked regions. The shocked wind has a thickness $R_b - R_c \propto t$. The density of this region is estimated to be just the shocked wind density, $4\rho_w/\mu_e m_p = 1.0\times10^4\,\mu_e^{-1} K_{13} E_{51}^{-1} M_1 t_{\rm yr}^{-2}\,{\rm cm^{-3}}$. Putting these together, the shocked wind contributes a dispersion measure ${\rm DM_{w,sh}} \propto t^{-1}$. There is also a wind contribution from all of the unshocked wind material outside the radius of the forward shock, ${\rm DM_{w,unsh}} = K/(\mu_e m_p R_b) \propto t^{-1}$. Since this scales the same as the shocked region, the two can simply be added together to provide the total wind DM. Note that this is still subdominant to the SNR contribution at these times.

Wind: Wind-Dominated Stage DM Estimate

As mentioned above, the wind-dominated stage may only occur at very late times because $t_{\rm ch}$ is rather long. Nevertheless, with this caveat in mind, we can still solve for the DM. At sufficiently late times, it is dominated by the shocked and unshocked wind material (even later than the times shown in Figure 6). In this late-time limit, the width of the shocked wind material grows as $R_b - R_c \propto t_{10^4\,{\rm yr}}^{2/3}$, where $t_{10^4\,{\rm yr}} = t/10^4\,{\rm yr}$, and the density falls as $\rho_w(R_b)/\mu_e m_p \propto R_b^{-2} \propto t^{-4/3}$. Putting these together results in a DM from the shocked region ${\rm DM_{w,sh}} \propto t^{-2/3}$. Just as for early times, there is also a contribution from the unshocked wind as long as it is ionized, ${\rm DM_{w,unsh}} = K/(\mu_e m_p R_b) \propto t^{-2/3}$, and the total DM is the sum of the two. There is also a contribution from the SNR itself, but we ignore it here since it is comparable to the wind component we already account for and it does not have a simple power-law solution. It is included in the plots, though, such as Figures 6 and 7, and this is the reason there is still a non-negligible dependence on $M$ at the latest times plotted.
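The two wind DM pieces of Equations (38) and (39) are simple enough to evaluate directly; a sketch (our own unit handling, with the density form $n_w = K/\mu_e m_p r^2$ assumed as above):

```python
# Sketch of the shocked / unshocked wind DM pieces: a factor-4 compressed
# shell between R_c and R_b at the local wind density, plus the integral
# of the unshocked wind from R_b to infinity, K/(mu_e m_p R_b).
M_P, PC = 1.673e-24, 3.086e18

def dm_wind(R_c_pc, R_b_pc, K=1e13, mu_e=1.2):
    R_c, R_b = R_c_pc * PC, R_b_pc * PC
    n_w_Rb = K / (mu_e * M_P * R_b**2)          # wind density at R_b [cm^-3]
    dm_sh = 4.0 * n_w_Rb * (R_b - R_c) / PC     # shocked shell [pc cm^-3]
    dm_unsh = K / (mu_e * M_P * R_b) / PC       # unshocked wind [pc cm^-3]
    return dm_sh, dm_unsh                        # both fall off as ~1/R_b

print(dm_wind(0.9, 1.0))  # ~ (0.21, 0.52) pc cm^-3 for K = 1e13 g/cm
```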
Wind: Rotation Measure

A wind environment is also interesting because it can provide an ordered magnetic field that can be swept up by the SNR. Thus, for the wind case we focus on this possible contribution to the RM, rather than on shock generation of magnetic fields as in the constant density case. Consider a toroidal magnetic field with the functional form $B_\phi(r) = B_*\,x\,(R_*/r)$, where $x \equiv v_{\rm rot}/v_w$. This is basically a split monopole that has been wrapped up by the star's rotation. The wind's contribution to the RM is determined by flux freezing of the swept-up magnetized material (as discussed by Harvey-Smith et al. 2010). Once the forward shock has reached a radius $R_b$, the swept-up magnetic flux is set by integrating $B_\phi$ out to $R_b$, where we assume $R_b \gg R_*$. If the magnetic field within the shocked wind region is $B'_\phi$, then equating the magnetic flux of this material with the swept-up flux allows us to solve for the shocked magnetic field strength. The rotation measure then follows by combining $B'_\phi$ with the shocked wind DM. This is plotted for a variety of different parameters in Figure 8, which demonstrates that the RM drops dramatically because of the combination of both the density and the magnetic field strongly decreasing with time. Nevertheless, the RM can be very high at early times, especially if the magnetic field is larger than the modest field we assume here. Also note that $x$, $R_*$, and $B_*$ are fixed here, even though in detail they should differ for different types of massive progenitors.

Wind: Ejecta-Dominated Stage RM Estimate

Using the expressions given above allows us to estimate the early-time magnetic field, where $x_{0.1} = x/0.1$. The total RM then follows by multiplying by the wind DM. Thus the RM contribution from the wind can be considerable.

[Fig. 9 caption: The RM versus DM evolution for all the constant ISM density models considered in Section 2. In comparison, measured values for FRBs are shown with solid symbols. In the case of the repeating FRB 121102, the local DM can be known from the localization of the source, and thus it is plotted with a square. All other FRBs are plotted as upper limits on DM, since they are not localized and a significant fraction of their DM could be from the IGM.]

Wind: Wind-Dominated Stage RM Estimate

For the late-time evolution, we use the same analytic expression from Equation (58) to derive the field in the shocked wind (Equation 60), which falls off as $\propto R_b^{-1} \propto t^{-2/3}$. Multiplying this by the DM results in a wind RM that declines roughly as $\propto t^{-4/3}$. Thus the RM decline becomes somewhat shallower at late times, although the RM is so small at this point that it may be negligible.

COMPARISON TO FRB MEASUREMENTS

We now consider the implications of the DM and RM evolution described in the previous sections for the DMs and RMs observed for FRBs. In Figures 9 and 10, we plot the RM versus DM evolution for all of the constant density and wind models considered in Sections 2 and 3. As a comparison, we plot all FRBs with measured values of RM and DM with solid symbols. In the case of the repeating FRB 121102 (Spitler et al. 2014; Scholz et al. 2016), the local DM can be known from the localization of the source, and thus it is plotted with a square. Furthermore, its RM has been measured to vary over $(1.33-1.46)\times10^5\,{\rm rad\,m^{-2}}$ (Michilli et al. 2018). All other FRBs are plotted as upper limits on DM, since they are not localized and a significant fraction of their DM could be from the IGM. Their measured RMs are available in the literature (e.g., Masui et al. 2015). First examining Figure 9, we see that FRB 121102, which has the best known values for these properties, is actually fairly consistent with these estimates if the ISM is sufficiently dense ($n_0 \sim 100\,{\rm cm^{-3}}$). Furthermore, a large $n_0$ would help keep $d{\rm DM}/dt$ rather small, as has been observed for this FRB over many years, because $t \sim t_{\rm ST}$. This would imply an age of the SNR of $\sim10^2-10^3\,{\rm yrs}$, depending on the mass of the ejecta, which would still be a young NS, but old enough that free-free absorption of the FRB should not be a problem, as described in the theoretical work of Piro (2016) and the empirical study by Bietenholz & Bartel (2017). Most recently, it has been revealed that the RM of FRB 121102 has decreased over a $\sim$7 month timescale, while the DM has remained relatively constant (Michilli et al. 2018). This is again qualitatively consistent with our results when the SNR is near the Sedov-Taylor timescale.
The other FRBs are potentially more difficult to reconcile with this picture. Although their DM values are upper limits, the low RM values indicate that the local DM must be very small. Furthermore, if this is the case, then it would be difficult to satisfy both the DM and RM unless the ISM densities are much smaller than what we infer for FRB 121102. Comparing to Figure 10, the situation is seemingly reversed. Now it is FRB 121102 that is inconsistent with any of the models unless the magnetic field were a factor of $\sim10^4$ higher. On the other hand, the other FRBs are fairly consistent with the wind models. Even though these DM values are upper limits, they could still be reconciled if lower by a factor of $\sim10$ or more, simply by adjusting the magnetic field of the progenitor star. An outstanding question remains whether all FRBs are the same or whether the repeater should be considered a separate class. Interestingly, the comparisons here argue that the combined DM-RM values are yet another way in which the repeater FRB 121102 appears to be unique compared to the other FRBs. This may mean that the environments are fundamentally different. Alternatively, it could be that the environments are actually similar, but that the repeater is being observed in a different stage of evolution. As Equation (36) highlights, the wind may not extend as far as the typical Sedov-Taylor length scale. Thus, one could imagine that a given system could be wind dominated at early times (like the non-repeaters appear to be) but be more like a constant density ISM case at later times (like the repeater). Comparisons like this will be important in the future to better classify the ways in which FRBs are different or the same.

CONCLUSIONS AND DISCUSSION

Motivated by the hypothesis that FRBs are from young neutron stars and thus should be embedded within SNRs, we have revisited the impact of an SNR on FRBs. This includes both constant density ISM and wind environments, and for the latter case we derived new analytic solutions for the SNR evolution, summarized in Appendix A and Table 2. In each case, we provided analytic expressions for both the DM and RM values. These are split into early times, which correspond to the stage when the blastwave is moving at constant velocity ($t < t_{\rm ST}$ or $t < t_{\rm ch}$ for the constant density ISM and wind cases, respectively), and late times, which is when the SNR has swept up an amount of material comparable to its mass ($t > t_{\rm ST}$ or $t > t_{\rm ch}$). Our main conclusions are as follows.

• The DM and RM are mostly determined by two regions: SN ejecta heated by the reverse shock and the surrounding material heated by the forward shock.

• At early times, the DM is dominated by the SN ejecta, but it is not the case that ${\rm DM} \propto t^{-2}$ as normally assumed in the literature. This is because of the dynamics of the reverse shock, which results in a shallower scaling for the DM and a dependence on the density of the surrounding medium.

• At intermediate times ($t \sim t_{\rm ST}$), the DM for the constant density ISM case can be rather constant for hundreds of years if not more, so that a young neutron star hypothesis should not be ruled out if the DM is not observed to change for a repeating FRB. On the other hand, the RM is found to always be decreasing.

• For the wind case, the DM always decreases with time. Furthermore, a magnetized wind swept up by the SN provides another region that may contribute to the RM observed for FRBs.
• The DM and RM for the repeating FRB 121102 appear consistent with the constant density case if the ISM density is large ($n_0 \sim 100\,{\rm cm^{-3}}$), which would also help explain why $d{\rm DM_{SNR}}/dt$ is small. This implies an age of the FRB progenitor of $\sim10^2-10^3\,{\rm yrs}$, depending on the SN ejecta mass. Furthermore, its decreasing RM while the DM is relatively constant is again qualitatively consistent with this interpretation.

• A constant density ISM is difficult to reconcile with the other FRBs (because of their lower RM values) unless a significant fraction (> 99.9%) of their DM is from the IGM and host galaxy.

• On the other hand, the wind case seems to naturally fit most FRBs that are not the repeater. If this explains their DMs and RMs, it would argue that these FRBs are rather young and thus should have strongly decreasing DM and RM values if seen to repeat.

• A significant contribution can be made to the RM even if the DM is not dominated by the SNR and is instead mostly due to the IGM. This means one should be cautious about inferring a magnetic field from observations by using the ratio RM/DM (as done in Ravi et al. 2016) if different regions of electrons are contributing to each of these quantities.

Considering the final point, the magnetic field generating the RM may be estimated when the RM and/or DM vary, since this helps separate the contribution of free electrons near the FRB from the IGM contribution. For example, Katz (2018) shows that using the upper bound on the variation in DM when the RM varies can place a lower bound on the magnetic field. We emphasize, though, that simply assuming a given system will only be the constant density case or the wind case is probably an oversimplification. In general, one could imagine an SNR at first mostly being dominated by a wind, but then evolving to a constant density case once it has overtaken the extent of the wind. In such cases, as highlighted by the discussion of the wind extent at the beginning of Section 3 and Equation (36), one might expect the $t < t_{\rm ch}$ solutions to be most applicable at early times, but the $t > t_{\rm ST}$ solutions to apply later. This issue, as well as our currently simplistic treatment for following the contact discontinuity (see the discussion at the beginning of Section 2), argues that the next stage for this research necessitates numerical models of the SNR evolution. This would allow for more complicated density distributions for the surrounding material. In addition, it would allow us to consider a more realistic density distribution for the SNR itself: instead of just assuming a constant density sphere as done here, it should in fact have a steep outer density gradient (e.g., Truelove & McKee 1999). Looking beyond this, multi-dimensional simulations would be useful to resolve the complicated filamentary density structure that is seen in real SNRs. This may cause the DM and RM to vary significantly from what we calculate here, and thus our work represents the average properties at any given time. Such simulations would help in understanding the size and statistical properties of the deviations from this average. Ultimately, though, one would like to see more repeating FRBs, since this work demonstrates that changes in the DM and RM values can strongly constrain the environment of the FRB. Even in the comparisons shown in Figures 9 and 10, there appears to be some dichotomy between the repeater and those FRBs that have not been seen to repeat.
Actually localizing some of these other FRBs that have both a DM and an RM measurement would allow the IGM component of their DMs to be subtracted. This would improve our understanding of their local DMs, and we would have a better idea of how different these bursts really are. In lieu of this, large statistical samples of FRBs may also be helpful, as expected from the Canadian Hydrogen Intensity Mapping Experiment (CHIME; CHIME/FRB Collaboration et al. 2018). CHIME will be especially important because its low frequency range of 400-800 MHz is sensitive to the free-free absorption cutoff expected from SNRs (Piro 2016), providing additional information about the age of the system that can be folded into the analysis presented here.

APPENDIX A

We again consider a steady wind with density profile $\rho_w(r) = K/r^2$, where $K = \dot M/4\pi v_w$ and $v_w$ is the velocity of the wind. Just as with the constant density ISM case, there are characteristic scales in this case analogous to the Sedov-Taylor scales. Here we refer to these with the subscript "ch" for characteristic, and from dimensional analysis the characteristic radius and timescale must obey $R_{\rm ch} \propto M K^{-1}$ (A2) and $t_{\rm ch} \propto E^{-1/2} M^{3/2} K^{-1}$ (A3), respectively. Also useful is the relation between the SN energy and the maximum ejecta velocity, $E = (3/10) M v_e^2$ (for constant density ejecta), which we will use throughout the derivation.

A.1. Ejecta-Dominated Stage, t < t_ch

Just as for the constant density case, the evolution can be separated into two stages: an ejecta-dominated stage for $t < t_{\rm ch}$ and a Sedov-Taylor stage for $t > t_{\rm ch}$. We derive the general evolution in each stage and then require continuity to connect the two solutions. Here we start with the ejecta-dominated stage. As shown in Figure 1, we envision mass $M$ ejected in a SN explosion, which generates a contact discontinuity $R_c$, with corresponding forward and reverse shocks at radii $R_b$ and $R_r$, respectively, as it moves into the surrounding medium. A key property of the SNR is the pressure ratio between the forward and reverse shocks, $\phi(t) \equiv \tilde v_r^2\,\rho_e(t)/[v_b^2\,\rho_w(v_e t)]$, where $\rho_e(t) = 3M/4\pi v_e^3 t^3$ is the density of the ejecta, $\rho_w(v_e t) = K/(v_e t)^2$ is the wind density at a radius $v_e t$, $v_b = dR_b/dt$ is the blastwave (forward shock) velocity, and $\tilde v_r$ is the velocity of the reverse shock in the rest frame of the unshocked ejecta just ahead of it (defined to be positive). Following McKee & Truelove (1995), a key principle we will use for finding analytic solutions to the evolution is assuming that this pressure ratio is roughly constant and equal to the value $\phi_{\rm ED}$ found in the ejecta-dominated stage; this specific value is from the numerical calculations by Truelove & McKee (1999). The other key estimate is the ratio of the blastwave radius to the contact discontinuity, $\ell = R_b/R_c$, also known as the lead factor. Again we assume that this is constant and approximated by its ejecta-dominated value $\ell_{\rm ED}$, taken from the work of Hamilton & Sarazin (1984). As with $\phi(t)$, we take the early time limit where $R_r \approx R_c$, and thus also approximate $R_b \approx \ell_{\rm ED} R_r$ and $v_b \approx \ell_{\rm ED} v_r$ for $t \to 0$. As an aside, one could instead use mass conservation and assume a constant density behind the forward shock to estimate the lead factor. Comparing the shocked mass to the swept up mass (with the factor of 4 from the compression at the forward shock) and solving leads to $R_b = (4/3)^{1/3} R_c$, or a lead factor of $\ell = (4/3)^{1/3} \approx 1.10$, slightly smaller than the value $\ell_{\rm ED}$ we use above. This is because in reality the density is not exactly constant in the region between the forward shock and the contact discontinuity.
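A short numerical sketch of these dimensional-analysis scales (our own helper; the dimensionless prefactors $A$ and $B$ are left symbolic here, since they are only fixed by the continuity conditions in A.3 below):

```python
# Sketch of the characteristic wind scales from dimensional analysis,
# assuming E = (3/10) M v_e^2 and leaving the dimensionless prefactors
# A, B as inputs (cgs units throughout).
import numpy as np

def characteristic_scales(E, M, K, A=1.0, B=1.0):
    v_e = np.sqrt(10.0 * E / (3.0 * M))   # max ejecta velocity [cm/s]
    R_ch = A * M / K                      # only combination with length units [cm]
    t_ch = B * E**-0.5 * M**1.5 / K       # only combination with time units [s]
    return v_e, R_ch, t_ch

v_e, R_ch, t_ch = characteristic_scales(1e51, 1.989e33, 1e13)
print(f"v_e ~ {v_e:.2e} cm/s, R_ch/A ~ {R_ch:.2e} cm, t_ch/B ~ {t_ch:.2e} s")
```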
Using the above approximations, we can then simplify Equation (A5) into a first-order differential equation for $R_r$, where we have introduced a constant $C_{\rm ED}$ (which collects the factors of $\ell_{\rm ED}$, $K$, $M$, and $v_e$ and has dimensions of inverse square-root time). Integrating this equation with the requirement that $R_r(t) \approx v_e t$ for $t \to 0$ results in $R_r(t) = v_e t\,(1 + C_{\rm ED}\phi_{\rm ED}^{1/2} t^{1/2})^{-2}$. Utilizing Equation (A6), $\tilde v_r(t) = v_e C_{\rm ED}\phi_{\rm ED}^{1/2} t^{1/2}\,(1 + C_{\rm ED}\phi_{\rm ED}^{1/2} t^{1/2})^{-3}$. Again matching the early-time limits, the blastwave radius and velocity are given by $R_b(t) = \ell_{\rm ED} v_e t\,(1 + C_{\rm ED}\phi_{\rm eff}^{1/2} t^{1/2})^{-2}$ (A15) and $v_b(t) = \ell_{\rm ED} v_e\,(1 + C_{\rm ED}\phi_{\rm eff}^{1/2} t^{1/2})^{-3}$ (A16), where we have replaced $\phi_{\rm ED}$ with $\phi_{\rm eff} \leq \phi_{\rm ED}$ to represent the loss of pressure felt by the forward shock as the SNR evolves away from the ejecta-dominated stage (which has a stronger effect on the forward shock in comparison to the reverse shock). As we show below, the continuity conditions allow us to uniquely calculate $\phi_{\rm eff}$.

A.2. Wind-Dominated Stage, t > t_ch

For sufficiently large $t$, the SNR evolution must obey the classical Sedov-Taylor solution for a wind profile (Ostriker & McKee 1988), $R_b \propto (Et^2/K)^{1/3}$. Taking the derivative of this expression gives $v_b = (2/3)R_b/t$, and integrating with the boundary condition that $R_b(t_{\rm ch}) = R_{\rm ch}$ results in $R_b(t) = R_{\rm ch}(t/t_{\rm ch})^{2/3}$ (A18) for the general form of the blastwave radius. The reverse shock is only weakly accelerated during the Sedov-Taylor stage, as represented by the small factor of 0.03 in the expressions for $R_r$ and $\tilde v_r$ in the Sedov-Taylor stage in Table 1. The exact value can be calibrated with numerical simulations, but here we simply assume, similar to the constant density case, a small acceleration with $\tilde a_r \approx 0.1\,\tilde v_r(t_{\rm ch})/t_{\rm ch}$. The exact value of this does not impact our DM calculations, since for $t > t_{\rm ch}$ the DM is dominated by swept-up wind material. Integration with constant acceleration then gives $\tilde v_r = \tilde v_r(t_{\rm ch}) + \tilde a_r(t - t_{\rm ch})$ (A20). Next, we solve the differential Equation (A6) to find $R_r(t)$. This is facilitated by making a change of variables $u = R_r/t$, using the fact that $\tilde v_r = -t\,du/dt$, solving for $u(t)$, and then transforming back to $R_r(t)$, resulting in $R_r(t) = t\,\{R_r(t_{\rm ch})/t_{\rm ch} - \tilde a_r(t - t_{\rm ch}) - [\tilde v_r(t_{\rm ch}) - \tilde a_r t_{\rm ch}]\ln(t/t_{\rm ch})\}$ for the reverse shock evolution.

A.3. Connecting the Stages

Exact expressions for $t_{\rm ch}$ and $R_{\rm ch}$ can be derived by requiring continuity of the solutions between the ejecta-dominated and wind-dominated stages. Utilizing Equations (A15), (A16), and (A18) and requiring continuity of $R_b$ and $v_b$ results in the expressions $\ell_{\rm ED} v_e t_{\rm ch}\,(1 + C_{\rm ED}\phi_{\rm eff}^{1/2} t_{\rm ch}^{1/2})^{-2} = R_{\rm ch}$ (A22) and $\ell_{\rm ED} v_e\,(1 + C_{\rm ED}\phi_{\rm eff}^{1/2} t_{\rm ch}^{1/2})^{-3} = (2/3)\,R_{\rm ch}/t_{\rm ch}$ (A23). These coupled equations have two unknowns, $R_{\rm ch}$ and $t_{\rm ch}$, that can be solved for algebraically. Since we know the scalings expected for $R_{\rm ch}$ and $t_{\rm ch}$ from Equations (A2) and (A3), this process is easiest if we substitute $R_{\rm ch} = A M K^{-1}$ and $t_{\rm ch} = B E^{-1/2} M^{3/2} K^{-1}$, where $A$ and $B$ are dimensionless. This allows all dimensional factors to cancel from Equations (A22) and (A23). Combining the two equations allows us to cancel $B$ and find a family of solutions $\phi_{\rm eff}(A)$. A critical point is calculated from this function, defined as where $d\phi_{\rm eff}(A)/dA = 0$, which results in a value of $\phi_{\rm eff} \approx 0.0479$. This can then be substituted back in to find $A$ and $B$. The two characteristic scales then follow, and can be substituted back into the time evolution equations summarized above to solve for $R_b(t)$, $v_b(t)$, $R_r(t)$, and $\tilde v_r(t)$ in both the ejecta-dominated and wind-dominated stages. These results are summarized in Table 2. A comparison of the solutions found here to the case of a constant density ISM is plotted in Figure A.1.
During the ejecta-dominated stage, both cases show similar evolution for $R_b$, but the wind case shows stronger evolution of $R_r$, because the early high densities push the reverse shock back into the ejecta more strongly. At later times, the wind case evolution is more gradual, since the blastwave is moving into lower density material. This causes the reverse shock to finally reach the center of the SN ejecta at later times as well.
A New Method for Correcting Urbanization-Induced Bias in Surface Air Temperature Observations: Insights From Comparative Site-Relocation Data

The effect of urbanization on surface air temperature (SAT) is one of the most important systematic biases in the SAT series of urban stations. Correcting this so-called urbanization bias has the potential to provide accurate basic data for long-term climate change monitoring and research. In the western region of the Yangtze River Delta, 42 meteorological stations with site-relocation histories from 2009 to 2018 were selected to analyze the statistical characteristics of the differences in comparative site-relocation daily average SAT. The annual average differences in the comparative site-relocation SAT series between the old and the new stations (SAT_DON) were used to characterize the impact of urbanization bias on the air temperature observation series. Using remote sensing technology, spatial datasets of land-use, landscape, and geometric parameters of the underlying surface in the 5-km buffer zone around each station were established as the observed environmental factors of the site, and the differences in these observed environmental factors (DOEFs) between the old and the new stations were calculated to indicate the change induced by urbanization. Next, multiple linear regression models of SAT_DON and DOEFs were constructed, showing that the error range of the model for simulated SAT_DON was 3.66-18.21%, and the average error was 10.09%. Finally, this new correction method (NCM) and the conventional correction method (CCM) were applied to the correction of the urbanization bias of the SAT series at Hefei station. After comparison, it was found that the NCM could reveal clear contributions of the rapid and slow stages of the urbanization process, and the resultant environmental changes around the stations, to the observed SAT. In summary, the NCM based on remote sensing technology can more reasonably and effectively correct the urbanization bias caused by local human activities, as well as reduce the error caused by the selection of reference stations via the conventional correction method.
INTRODUCTION

Urbanization directly affects the types of land use/cover and anthropogenic heat emissions around meteorological stations, leading to major changes in the observation environment (Gallo et al., 1996; Peterson, 2006; Trusilova et al., 2008; Chen et al., 2020), which in turn has an important impact on the accuracy, representativeness, and homogeneity of meteorological observation data (Davey and Sr, 2005; Vose, 2005). The contribution of the so-called urbanization bias (the effect of urbanization on surface air temperature (SAT); the list of abbreviations used in this article and their expanded names can be found in Appendix A) to meteorological observation data usually stems from changes in the observation environment against the background of urbanized areas (Ren et al., 2017). The urbanization bias is the largest systematic bias in SAT observation records in China, and correcting this bias has the potential to provide accurate basic data for large-scale climate change monitoring and research (Wen et al., 2019b). Urbanization bias has received a great deal of attention in the literature (Hansen et al., 2001; Fujibe, 2009; Zhang, 2009; Zhang, 2014; Wen et al., 2019a). Zhang (2009) subtracted the warming trend of rural stations from the warming trend of urban stations to correct the regional average SAT series of urban stations and obtained the regional average SAT series after removing the urbanization bias. Fujibe (2009) divided the meteorological stations in Japan into six categories in terms of the population density within a certain radius around each station and corrected the urbanization bias of the third- to sixth-category sites using the first and second categories of stations as reference stations. Hansen et al. (2001) corrected the urbanization bias of one typical station by utilizing a two-stage linear trend, based on the assumption that the SAT increased linearly in two periods. Zhou et al. (2019) pointed out that the occurrence probability of summer heatwave events over the Yangtze River Delta is closely related to the contribution of the urbanization effect. These studies imply that the correction method for urbanization bias is crucial for accurately exploring regional climate change. However, the conventional correction method (CCM) for urbanization bias still has some shortcomings, as follows: 1) Many studies have utilized population density or city size as the criterion for classifying meteorological stations. For example, Bai and Ren (2006) chose meteorological stations with a population of more than 100,000 as urban stations, but Liu (2006) classified the stations with a population of more than 40,000, and the stations that were not described as "rural," as urban stations. However, there have also been some studies that have utilized satellite remote sensing data to select reference stations, such as
Zhang (2014), who visually selected the stations outside the closed contour as reference stations in the temperature field retrieved from remote sensing data. It can thus be seen that there is no unified standard for the selection of reference stations, and it is difficult to find a pure reference station near an urban station, as reference stations are inevitably affected by urbanization, so the urbanization bias estimated for the SAT series is a minimum estimate (Zhang, 2014). 2) Previous studies corrected the SAT series based on the assumption that the urbanization bias presents a linearly increasing trend (Hansen et al., 2001; Zhang, 2009). However, in reality, the urbanization processes at different times and in different regions are variable, so it is impossible to subdivide the specific degree of contribution of the urbanization bias to the SAT series on temporal and spatial scales. In addition, there are considerable differences in the mechanisms and magnitudes of the impact of urbanization on different temperature elements (Li et al., 2014): despite the possibly limited contribution to regional warming (Chao et al., 2020), its impact on extreme temperatures is large (Li and Huang, 2013; Li et al., 2014; Zhou et al., 2019). In order to improve the representativeness of the observation environment of meteorological stations, many stations with severely damaged observation environments have been relocated. Taking 2015 as an example, 92 meteorological observation stations across the country were relocated in this year alone (Meteorological Observation Centre of CMA, 2013; Comprehensive Observation Department of China Meteorological Administration, 2015). According to the requirements of "the criterion of surface meteorological observation," "protection methods for meteorological exploration environment and facilities," and other documents formulated and issued by the China Meteorological Administration, site selection has a series of strict restrictions on factors such as altitude, distance, and obstacles. The area around a relocated station should be dominated by open vegetation, and the representativeness of the meteorological observation environment must be greatly improved. Meteorological observation series can represent the climate background of the region (Yang et al., 2013; Yang et al., 2017), so relocated stations can be used as relatively pure reference stations. In addition, "the criterion of surface meteorological observation" stipulates that the relocation of meteorological stations must involve at least one year of comparative observations between the new site and the old site, and the difference in comparative site-relocation annual average SAT between the old and the new stations (SAT_DON) provides high-quality data for studying the impact of urbanization bias on the SAT series. Therefore, SAT_DON can reduce the error caused by the selection of reference stations via the traditional urban-rural comparison method. The meteorological observation environment refers to the environmental space constituted by the minimum distance necessary to avoid various interferences and to ensure that the facilities of the meteorological observation station accurately obtain meteorological observation information. With the rapid development of remote sensing technology, the use of satellite data to study changes in the meteorological environment has become an emerging method (Yang et al., 2013; Li et al., 2015; Shi et al., 2015). Yang et al.
(2013) evaluated the observation environment by using land use/cover and the normalized difference vegetation index (NDVI) in the buffer zone around the meteorological station. Li et al. (2015) quantitatively studied the relationship between land use/cover change (LUCC) and the thermal environment in the buffer zone and subdivided the stations into three types by the contribution index of the thermal environment. The above studies show that it is feasible to utilize satellite data to investigate and study the observation environment, with the advantages of visualization and remodeling. However, existing remote sensing research on the observation environment only uses indicators such as LUCC and NDVI and does not fully consider the impact of the spatial pattern and configuration of different land-use types on the observation environment. Consequently, this study uses remote sensing technology to establish land-use parameters, landscape parameters, geometric parameters, and other spatial datasets around meteorological stations to characterize the differences in observation environment factors (DOEFs) between the old and the new stations, and analyzes and discusses the physical mechanisms by which urbanization bias influences the SAT series. The Yangtze River Delta (YRD) urban agglomeration has been one of the most highly urbanized areas in China for the past 30 years (National Bureau of Statistics, 2019). However, the development of Anhui, in the western region of the YRD, has been relatively slow, not having developed rapidly until the past 10 years. Therefore, the observation environments of national meteorological stations in Anhui Province have been seriously damaged in the past 10 years, and a large number of stations have been forced to relocate on a frequent basis (Meteorological Observation Centre of CMA, 2013; Comprehensive Observation Department of China Meteorological Administration, 2015), and this provides us with an opportunity to study the process of urbanization and station relocation. In summary, taking Anhui Province as the research area, meteorological stations with site-relocation histories were selected in this study, and the SAT_DON results between the old and the new stations were used to characterize the impact of urbanization bias on the SAT series. Landscape parameters, geometric parameters, and other spatial datasets in the 5-km buffer zone around the stations were established to characterize the DOEFs between the old and the new stations, and statistical models of the SAT_DON and DOEFs were constructed. This paper corrects the urbanization bias of the SAT series at a typical station by the new method and the conventional method, respectively, and finally discusses the advantages of the new method.

DATA AND METHODS

Data

1) Ground observation data. The SAT data mainly come from national reference climatological stations, which observe 8 times a day (once every 3 h); national basic meteorological stations, which observe four times a day [02:00, 08:00, 14:00, and 20:00 BT (Beijing time)]; and national general meteorological stations, which observe three times a day (08:00, 14:00, and 20:00 BT). The daily-averaged SAT is obtained by calculating the arithmetic mean of the temperature values observed at each time per day.
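As a trivial illustration of the daily averaging just described (our own helper, not operational CMA code):

```python
# Daily-mean SAT as the arithmetic mean of the fixed observation hours
# (8, 4, or 3 readings per day depending on station class).
def daily_mean_sat(obs_temps):
    """obs_temps: SAT readings (deg C) at the station's fixed hours."""
    return sum(obs_temps) / len(obs_temps)

print(daily_mean_sat([2.1, 6.4, 13.0, 8.3]))  # a 4-obs station: 02/08/14/20 BT
```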
Specifically, this study uses remote sensing images from the Landsat-7/ETM+ (Yao et al., 2010) and Landsat-8/OLI (Saputra et al., 2017) sensors to study the changes in the observation environment of the stations relocated before and after 2013, respectively. A comparison of the band information of these two sensors is given in Table 1.
Selecting Samples for Relocated Stations
For this study, we selected meteorological stations relocated between 2009 and 2018 as research samples, based on the historical evolution data and comparative observation data of the relocated stations, the surveys and evaluation reports of the observation environment of the national ground meteorological stations, and high-resolution satellite remote sensing images. The selection criteria were as follows: 1) the main reason for the relocation was that the observation environment of the station had been seriously damaged; 2) in order to minimize the influence of differences in regional and local climate background, the difference in altitude between the sites (before and after relocation) had to be less than 50 m, and a maximum horizontal distance between the sites of 20 km was adopted following previous studies (Wen et al., 2019; Shi et al., 2011); 3) there was no significant difference in topography; and 4) the type of observation instrument, the frequency of daily observations, and the daily-mean method of the temperature series did not change before and after relocation. Based on these criteria, 42 samples of relocated stations were selected, as shown in Figure 1. The relocated station samples comprise 25 urban stations and 17 reference stations, according to the meteorological station classification method of Ren et al. (2010), and the samples are evenly distributed over northern Anhui, the Yangtze-Huaihe region, the Yangtze River area, southern Anhui, and other regions. The samples in this study can therefore represent the impact of the level of urbanization development in different regions of Anhui Province on different types of stations.
Determining the Research Range of the Station Buffer Zone
Studies (Cai, 2008; Yang et al., 2013; Shi et al., 2015; Yang et al., 2020a) have shown that, since the observation height of the thermometer shelter in the observation field is 1.5 m, the maximum impact of urbanization on the observation data usually does not extend beyond 5 km under advection and turbulent transport conditions. We therefore selected a buffer zone with a radius of 5 km centred on each station to quantitatively study the impact of environmental changes on the SAT series.
Establishing a Dataset of Characterization Parameters of the Observation Environment in the Buffer Zone
Land-use parameters (Carolina et al., 2013) reflect the results of human land-resource utilization activities and are an important part of urban environmental change research. This study uses a supervised classification method to classify land use in the ENVI software and establishes four parameter indicators: built-up area ratio (AR_BT), water area ratio (AR_W), vegetation area ratio (AR_V), and bare land area ratio (AR_B).
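As an illustration of how such indicators can be derived from a classified image, the following minimal Python sketch computes the area-ratio parameters from a hypothetical classified land-use raster of the 5 km buffer, anticipating the largest-patch and gravity-centre indicators defined next. The class codes, the 4-connectivity used for patches, and the function name are illustrative assumptions, not specifications from the paper.

```python
import numpy as np
from scipy import ndimage

# Hypothetical class codes for the classified buffer raster.
CLASSES = {"built_up": 1, "water": 2, "vegetation": 3, "bare_land": 4}

def buffer_indicators(landuse, station_rc, cell_size=30.0):
    """Area ratios (%), largest patch index (%), and gravity-centre
    distances (km) per class inside the buffer.

    landuse    : 2-D integer array of class codes (cells outside the buffer set to 0)
    station_rc : (row, col) of the station within the array
    cell_size  : ground resolution in metres (30 m for Landsat)
    """
    valid = landuse > 0
    total = valid.sum()
    out = {}
    for name, code in CLASSES.items():
        mask = landuse == code
        out[f"AR_{name}"] = 100.0 * mask.sum() / total      # area ratio
        labels, n = ndimage.label(mask)                     # 4-connected patches
        if n:
            patch_areas = np.bincount(labels.ravel())[1:]
            out[f"LPI_{name}"] = 100.0 * patch_areas.max() / total
            rows, cols = np.nonzero(mask)                   # class gravity centre
            dr = rows.mean() - station_rc[0]
            dc = cols.mean() - station_rc[1]
            out[f"DIS_{name}"] = np.hypot(dr, dc) * cell_size / 1000.0
        else:
            out[f"LPI_{name}"], out[f"DIS_{name}"] = 0.0, np.nan
    return out
```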
The landscape parameters mainly include the largest patch index (LPI) (Wu, 2000) and the mean fractal dimension (FRAC_MN) (Wu, 2000) of each land type. The LPI identifies the dominant land type in the study area: the larger the LPI value, the more dominant that type of patch is in the overall landscape. The FRAC_MN is an index of patch shape: the larger the FRAC_MN, the more complex the shape of the patches and the more discrete their distribution. Eight parameter indicators were calculated with the landscape-index software Fragstats: the built-up largest patch index (LPI_BT), water largest patch index (LPI_W), vegetation largest patch index (LPI_V), bare land largest patch index (LPI_B), built-up mean fractal dimension (FRAC_MN_BT), water mean fractal dimension (FRAC_MN_W), vegetation mean fractal dimension (FRAC_MN_V), and bare land mean fractal dimension (FRAC_MN_B).
The geometric parameters mainly include the distances between the station and the gravity centres of the different land types in the buffer zone, and the distance between the station and the city centre (Liu et al., 2014). We used the ArcGIS software to extract the land types "built-up," "water," "vegetation," and "bare land" in the station buffer zone, then used the "Calculate Geometry" function to obtain the gravity centres of the different land types, and finally used the "Point Distance" function to calculate four parameter indicators: the distance between the station and the gravity centre of built-up land (DIS_BT), water (DIS_W), vegetation (DIS_V), and bare land (DIS_B). In the same way, the distance between the station and the city centre (DIS_C) was obtained in ArcGIS.
The current urbanization bias correction scheme still has deficiencies, mainly because of the limited set of indicators used to assess the local observation environment around meteorological stations. Landscape ecological morphology (Figure 2) can be used to explore the relationship between the spatial pattern of urban land use and the urban local microclimate (Zhou et al., 2011; Estoque et al., 2017): landscape composition distinguishes land-use types, while landscape configuration accounts for the geographic characteristics of each land-use type. In addition to the conventional land-use assessment indicators, therefore, the present work employs landscape ecological indicators and geometric indicators to assess the observation environment around each station. Finally, based on correlation analysis, six indicators (AR_BT, AR_W, LPI_BT, LPI_W, DIS_BT, and DIS_W) were selected.
Simulation and Correction Method for the Urbanization Bias in the SAT Series
This article starts from the physical causes of the impact of urbanization on the observation environment and simulates the degree of impact of the urbanization bias on the SAT series by constructing statistical models of SAT_DON and the DOEFs. Multiple linear regression is a statistical method for determining the quantitative relationship between a dependent variable and multiple independent variables (Lynn, 2007; Li, 2020). Assuming a linear relation between the dependent variable Y and the k independent variables X_1, X_2, ..., X_k, the functional relationship between Y and X can be expressed as
Y = β_0 + β_1 X_1 + β_2 X_2 + ... + β_k X_k + ε, (1)
where β_0 is the regression constant; β_1, β_2, ..., β_k are the regression coefficients; and ε is the regression residual. After substituting the land-use, landscape, and geometric parameters of the station buffer zone into Eq. 1, simulated values of the changes in the SAT series are obtained, and the urbanization bias can then be corrected with these simulated values:
T′_i = T_i − ΔT_i, (2)
where i is the year number, running from the earliest recorded year to the latest corrected year; T′_i is the annual average SAT after correction in the ith year (°C); T_i is the observed annual average SAT in the ith year (°C); and ΔT_i is the change in the annual average SAT series caused by urbanization bias in the ith year relative to the earliest observation year (°C).
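A minimal sketch of the workflow of Eqs. 1-2 is given below. The predictor matrix, coefficient values, and temperature series are synthetic placeholders for illustration only; they are not the fitted values reported later in the paper.

```python
import numpy as np

# Columns: DOEF changes for each relocated-station sample, e.g.
# [dAR_BT, dAR_W, dLPI_W, dDIS_BT, dDIS_W]; all values here are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(37, 5))                        # 37 samples, 5 indicators
beta_true = np.array([0.02, -0.01, -0.005, -0.05, 0.03])
y = 0.1 + X @ beta_true + rng.normal(0, 0.05, 37)   # SAT_DON (deg C)

# Eq. 1: least-squares fit of y = b0 + X b + eps
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
r2 = 1.0 - resid.var() / y.var()                    # coefficient of determination

# Eq. 2: correct an observed SAT series with the simulated bias dT_i.
# Pretend the first four sample rows are the target station's DOEFs in four years.
T_obs = np.array([15.8, 16.1, 16.5, 16.9])          # observed annual SAT (deg C)
dT = A[:4] @ coef                                   # simulated urbanization bias
T_corrected = T_obs - dT
```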
Case Analysis of a Typical Station
Hefei National Meteorological Observation Station had become completely surrounded by built-up land before relocation because of the urbanization of recent years (Figure 3); the observational environment score of Hefei station was only 63.2. After relocation, Hefei station moved 30.2 km to the northwest of the old site, with an altitude difference of 6.0 m, and the observation environment of the station improved greatly, the score increasing to 99.3. Table 2 shows the DOEFs between the old and the new stations in the 5-km buffer zone. AR_BT decreased from 42.17% to 4.23% after relocation, indicating that the area of built-up land around the station was greatly reduced; FRAC_MN_BT declined to a certain extent, indicating that the distribution of built-up patches around the station became more concentrated than before relocation; and DIS_BT increased from 0.53 to 3.13 km, indicating that the urbanization impact of the built-up land type on the station had weakened after relocation. The parameters of water, vegetation, and bare land also improved to varying degrees. In addition, the SAT_DON in 2018 showed that the annual average SAT of the new station (Figure 3B) was 0.83 °C lower than that of the old station (Figure 3C), a decline of 4.8%. In summary, the representativeness of the observation environment at Hefei station improved after relocation, and the SAT_DON can represent the degree of impact of the urbanization bias on the SAT series.
Analyzing the Statistical Characteristics of the Samples' Daily Average Differences
The daily-averaged SAT_DON series was close to a normal distribution and fluctuated in the range of −2.3 to 4.4 °C (Figure 4). The sample size, mean, and standard deviation were 15,347, 0.572 °C, and 0.568 °C (Table 3), respectively. These statistics show that the sample had a large range of variation, but the data were mainly concentrated near the mean value, so the overall volatility of the sample was relatively small. The kurtosis of the sample was 2.057; the number of samples with a daily-averaged SAT_DON of 0.4 °C was the largest, reaching 1,515, and the number of samples with a daily-averaged SAT_DON of 0.2-0.8 °C reached 9,193, accounting for 59.6% of the total, indicating that the daily-averaged SAT_DON distribution was steeper than the normal distribution. The sample skewness was 0.673, and the number of daily-averaged SAT_DON values greater than the mean was 8,226, accounting for 53.6% of the total sample, indicating that more of the data points lay on the right-hand side of the distribution, close to the mean. In addition, there were 828 negative values in the sample, accounting for 5.39% of the total, which means that on those days the SAT of the old stations was lower than that of the new sites (Figure 4).
The influence of the meteorological station observation environment on the SAT series is more complicated than a simple warming. Buildings cause the wind speed to decay downwind and reduce air circulation in the observation field, thereby enhancing the locality of the temperature observations. However, under the unstable stratification conditions of daytime, the shadowing of solar radiation by buildings and aerosol cooling effects might make the SAT observed at stations surrounded by buildings lower than at stations with open terrain (Li et al., 2011; Zheng et al., 2018; Zheng et al., 2020; Yang et al., 2020b).
Correlation Analysis of SAT_DON and DOEFs
A total of 37 samples were selected from the relocation samples to analyze the correlation between SAT_DON and the DOEFs, and the buffer parameters were screened in order to establish the urbanization-bias correction model in the next step. Figure 5 presents the results of the statistical significance tests and the histogram of correlation coefficients between SAT_DON and the DOEFs, in which solid bars represent correlations reaching the 0.05 significance level, while hollow bars represent the opposite. SAT_DON had a significant positive correlation with AR_BT after relocation, with a correlation coefficient of 0.7843, which passed the 0.05 significance level. SAT_DON and AR_W showed a significant negative correlation, with a correlation coefficient of −0.4819, which also passed the 0.05 significance level. This shows that, with the continuous increase in built-up land around a meteorological station, the decrease in the heat capacity of the underlying surface and the increase in anthropogenic heat in the buffer zone lead to warming in the SAT series. The heat capacity of water bodies is relatively large, and heat in the buffer zone of a station can be carried away as water evaporates, which leads to a drop in the SAT series (Zeng et al., 2010). In addition, SAT_DON also had high correlations with LPI_BT, LPI_W, DIS_BT, and DIS_W after relocation, showing that the more dominant built-up land is in the buffer landscape and the closer the station is to the built-up centre of gravity, the greater the SAT_DON, while for water the relationship is the opposite. Accordingly, this article uses six indicators (AR_BT, AR_W, LPI_BT, LPI_W, DIS_BT, and DIS_W) to study the response of SAT_DON to changes in the DOEFs in the buffer zone.
Simulation and Accuracy Evaluation of Urbanization Bias in the Annual Average SAT Series
The parameter indicators in the buffer zone underwent great changes after relocation. As shown in Figure 6, the change values of the built-up area ratio (ΔAR_BT) of all the relocation samples were positive, which shows that the area of built-up land around the relocated stations was reduced, and 92.18% of the ΔAR_BT values were concentrated in the range of 0-50%. The number of stations with a negative change value of the water area ratio (ΔAR_W) reached 22, which shows that the water area around most stations increased after relocation. The change values of the built-up LPI (ΔLPI_BT) of all the relocation samples were positive, and 92.18% of the ΔLPI_BT values were concentrated in the range of 0-20. The number of stations with a negative change value of the water LPI (ΔLPI_W) also reached 22, which shows that the dominance of water around most stations increased after relocation. All the change values of the distance between the station and the built-up centre of gravity (ΔDIS_BT) were negative, which shows that all the relocated stations moved farther away from the centre of gravity of the built-up patches.
The number of stations with a positive change value of the distance between the station and the water centre of gravity (ΔDIS_W) reached 24, which shows that most of the relocated stations moved closer to the centre of gravity of the water patches.
We then used these statistics to analyze the response relationship between SAT_DON and the DOEFs and to simulate the impact of the urbanization bias on the SAT series. The sample was subjected to a collinearity diagnosis in SPSS, and the statistical model relating SAT_DON to the DOEFs was finally constructed (Eq. 3), with ΔT_avg, the annual averaged SAT_DON of the meteorological stations, as the dependent variable. Table 4 shows the coefficient of determination (R²) for the stepwise regression of the fitted model. With each added independent variable, the R² of the model increases; the R² of the final fitted model reached 0.953, passing the 0.05 significance test, which indicates that the five influencing factors have a crucial impact on SAT_DON. Using Eq. 3, the change values of the annual average SAT of the remaining five relocated stations in the sample were simulated and compared with the real change values. As shown in Table 5, the difference between the simulated and real values fluctuates in the range of 0.014-0.108 °C. The simulation error range is 3.66-18.21%, and the average error is 10.09%.
DISCUSSION
The conventional correction method (CCM; Zhang, 2009; Zhang, 2014; Wen et al., 2019a) removes the annual average urbanization impact by subtracting a linearly growing term, starting from the earliest year of the target station series; the corrected series then represents the regional annual average SAT series with the urbanization bias removed:
T′_i = T_i − ΔT_u-r × (i − 1)/10, (4)
where i is the serial number of the year, from the earliest recorded year (i = 1) to the latest corrected year; T′_i is the annual average SAT after correction in the ith year (°C); T_i is the annual average SAT before correction in the ith year (°C); and ΔT_u-r is the difference in the SAT warming rate between the urban and reference stations (°C decade⁻¹). It should be noted that Eq. 4 rests on the assumption that the urbanization bias grows linearly.
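To make the contrast between the two correction schemes concrete, the sketch below applies both to a toy series. The bias values, the series itself, and the interpolation to annual resolution are invented for illustration and do not reproduce the paper's Table 6.

```python
import numpy as np

years = np.arange(1979, 2019)
T = 15.5 + 0.02 * (years - years[0]) \
    + np.random.default_rng(1).normal(0, 0.3, years.size)   # toy annual SAT

# CCM (Eq. 4): bias assumed to grow linearly at dT_ur (deg C per decade)
dT_ur = 0.065
T_ccm = T - dT_ur * (years - years[0]) / 10.0

# NCM (Eq. 2): bias dT_i simulated from the DOEFs via Eq. 3 at selected
# image epochs; the epoch values here are invented, interpolated to each year.
epochs = np.array([1979, 1987, 1998, 2004, 2009, 2018])
dT_epoch = np.array([0.23, 0.35, 0.46, 0.47, 0.44, 0.85])    # illustrative only
T_ncm = T - np.interp(years, epochs, dT_epoch)
```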
For this part of the study, we take the annual average SAT series of Hefei station from 1953 to 2018 (with homogenization applied to remove discontinuities or jump points caused by the relocation) as an example to discuss the correction of the urbanization bias. The ΔT_u-r of Hefei station was 0.065 °C decade⁻¹, with Shouxian station selected as the reference station (see Figure 1). Because remote sensing images from before the 1950s are difficult to obtain, and the observation environments of meteorological stations were then essentially unaffected by urbanization, we set the initial values of the various parameters in the station buffer zone in the earliest recorded year to 0. We used the new correction method (NCM) based on remote sensing to correct the urbanization bias of Hefei station. Following the course of Hefei's urbanization, remote sensing images at six epochs (1979, 1987, 1998, 2004, 2009, and 2018) covering the Hefei area were selected (Figure 7). The five parameters AR_BT, AR_W, LPI_W, DIS_BT, and DIS_W were interpreted and substituted into Eq. 3 to obtain the change values of the annual average SAT series, and the urbanization bias was then corrected using Eq. 2. In addition, we also used the CCM to correct the urbanization bias of Hefei station at the same six epochs, and the results obtained by the CCM and the NCM were compared and analyzed.
The corrections obtained by the CCM were higher than those of the NCM (Table 6). The CCM does not take into account the impact of urbanization on the reference station, and therefore the urbanization bias obtained from the reference station is a minimum estimate. The rate of urban development in Hefei was relatively slow before 2004. From 2004 to 2018, the total GDP of Hefei increased by ¥723.321 billion, with an annual average growth rate of 81.77%, the fastest economic growth in the YRD region (National Bureau of Statistics, 2019). The warming in the SAT series caused by the urbanization bias should change with economic development, but the warming rate at Hefei station obtained by the CCM is a fixed value (0.065 °C decade⁻¹), and the underlying assumption that the impact of urbanization increases linearly year by year is questionable (Zhang, 2009). The results of the NCM show that the urbanization bias of Hefei station increased gradually from 0.233 to 0.457 °C between 1979 and 1998. Owing to the relocation of Hefei station in 2004, the observation environment improved significantly, and the NCM-based urbanization bias did not increase much between 2004 and 2009, whereas the CCM-based urbanization bias kept increasing over time because the relocation was not taken into account. The urbanization bias of Hefei station then increased quickly, from 0.436 to 0.851 °C, as the city experienced rapid development from 2009 to 2018. The NCM constructed in this study thus produces results that are dynamically consistent with the observation environment of the station and the development of the city.
In summary, the present work mainly focused on a sample-based exploration of our new urbanization bias correction method, which can make up for the shortcomings of the conventional linear method. We will seek more relocated stations across the whole Yangtze River Delta region to extend the application of the new method in the future. Based on the R² of the fitted results (Table 4), it is clear that the selected parameters together explain more than 90% of the urbanization bias. In addition, urbanization is reflected not only in two-dimensional horizontal urban expansion but also in the vertical morphology of the three-dimensional urban spatial structure. Previous studies have suggested that the vertical geometry of urban canopy buildings also affects the local microclimate (Oke, 2004; Bonacquisti et al., 2006; Chen et al., 2020). In the future, we will add three-dimensional indicators to the set used for urbanization bias correction.
CONCLUSION
In this study, we selected 42 meteorological stations with a site-relocation history in the western region of the YRD between 2009 and 2018 as research samples and then used the annual SAT_DON series between the old and the new stations to characterize the impact of the urbanization bias on the SAT series.
We proposed a new method for correcting urbanization-induced bias in surface air temperature observations based on comparative site-relocation data. The main conclusions are as follows. The spatial land-use, landscape, and geometric parameters of the underlying surface in the 5-km buffer zone around a station serve well as the DOEFs of the site. The comparative analysis revealed that the parameters AR_BT, AR_W, LPI_BT, LPI_W, DIS_BT, and DIS_W had the highest correlations with SAT_DON, with absolute correlation coefficients exceeding 0.4 and passing the 0.05 significance test. After a collinearity diagnosis, a new linear regression model between five parameters (AR_BT, AR_W, LPI_W, DIS_BT, and DIS_W) and SAT_DON was constructed to correct the urbanization bias; it clearly reflects the effects of the rapid and slow phases of urbanization and of environmental changes around the site on the observed SAT. The CCM does not take into account that the reference station is itself affected by urbanization, which may lead to an underestimate of the urbanization bias; in addition, the CCM cannot account for station relocations, which may lead to an overestimate of the urban bias after a station has been relocated. In contrast, the NCM constructed in this study makes up for these shortcomings, corrects the urbanization bias caused by local human activities more reasonably and effectively, and also reduces the error caused by the selection of reference stations in the traditional urban-rural comparison method.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
AUTHOR CONTRIBUTIONS
TS: methodology, formal analysis, results and discussion, and writing (original draft preparation); DS: discussion and writing (reviewing and editing); YH: discussion and writing (reviewing and editing); GL: discussion and writing (reviewing and editing); YY: conceptualization, data curation, methodology, results and discussion, and writing (reviewing and editing).
ACKNOWLEDGMENTS
This study was supported by the National Key R and D Program of China (Fund No. 2018YFC1506502), NSFC-DFG (42061134009), and the Beijing Natural Science Foundation (8202022 and 8171002). The data that support the findings of this study are openly available. The Meteorological Information Center of the China Meteorological Administration provided the meteorological data (http://data.cma.cn/site/index.html), and the remote sensing data used in this study were Landsat data from the United States' EOS (Earth Observation System), refined by the Department of Earth System Science/Institute for Global Change Studies, Tsinghua University (http://data.ess.tsinghua.edu.cn/).
Association of Personality Traits with Dietary Habits and Food/Taste Preferences
Background: Personality plays an important role in food choices. The aim of this study was to assess the association of personality traits with dietary habits and food preferences. Methods: This cross-sectional study was carried out on 224 healthy female students aged 18-30 years with a normal BMI. Dietary habits, food preferences, and personality were assessed using validated questionnaires. Results: Our results showed that neuroticism and openness were associated with low scores, while conscientiousness was related to high scores, of dietary habits (r = -0.33, P < 0.001; r = -0.13, P < 0.05; and r = 0.26, P < 0.001, respectively). In addition, neuroticism was correlated with a preference for salty, sour, and fatty foods and negatively associated with dairy products (P < 0.05). Extraversion showed a positive correlation with preference for fast foods, ice cream, chocolate, and cocoa, and a negative correlation with meat. Openness was positively correlated with preference for meat and biscuits and negatively correlated with fruits (P < 0.05). Agreeableness was negatively related to having soft drinks and sweetened fruit juices, and conscientiousness had a positive association with preference for dairy products, vegetables, nuts, and food with salty tastes, and a negative association with biscuits (P < 0.05). Conclusions: Overall, assessing personality traits could be useful to identify young women who may be at risk of unhealthy dietary habits.
Introduction
Higher scores for following a "traditional diet" were related to lower levels of openness. [5] Another study, conducted on couples from the University of North Carolina Alumni Heart study, demonstrated that only openness was associated with ratings of dietary quality for both wives and husbands. [6] To the best of our knowledge, few studies have examined the potential relationship between personality and taste preferences. In a study conducted by Byrnes and Hayes, the personality constructs of sensation seeking and sensitivity to reward showed positive correlations with preference for spicy foods. [7] Kikuchi and Watanabe reported that students who scored high on neuroticism preferred salty and sweet tastes, and that individuals with high scores on openness and agreeableness did not like salty tastes. [8] Among Dublin Business School students, however, high scores for conscientiousness and openness were not correlated with food preferences. [9] No study, to date, has reported the association of personality with preferences for different food groups. Dietary habits are likely to be related to historical, geographical, and cultural settings. Furthermore, young women, as future mothers, will have a pivotal role in establishing family habits and would benefit from advice on healthy nutritional preferences. Therefore, the present study was designed to assess the association of personality traits with dietary habits and food preferences in female students.
Participants
This cross-sectional study was conducted on a sample of 224 female students aged 18-30 years at Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran, from December 2013 to April 2014.
The study population comprised 3500 students. An initial sample size of 196 subjects was obtained from the ratio formula n = Z²P(1−P)/d², based on a 95% confidence level (α = 0.05), Z = 1.96, P = 0.5, and d = 0.07, giving n = (1.96² × 0.5 × 0.5)/0.07² ≈ 196; this was then increased to 236 subjects for better coverage. Considering the variety of personality traits in the population, the proportion P was set to 50 percent, the value that maximizes the required sample size. Finally, 224 individuals entered the study. The participants were recruited from 7 faculties of the university (pharmacy, medicine, para-medicine, health, dentistry, rehabilitation, and nursing and midwifery) using a multistage random-sampling method. All subjects included in this study were free from chronic diseases such as diabetes, heart, kidney, and liver disease and cancer, and had no mental disorders. They were not on special diets or medications, and were not smokers, pregnant, or lactating.
Measures
Personality traits were assessed using a validated Persian version [10] of the NEO Five-Factor Inventory (NEO-FFI), which consists of 60 items measuring the basic structure of normal personality along five major dimensions. These dimensions, also called the Big Five, are neuroticism (tendency to experience negative emotions), extraversion (quantity and intensity of one's interpersonal interactions), openness to experience (proactive seeking and appreciation of new experiences), agreeableness (tendency to have faith in other people and to be eager to help them), and conscientiousness (degree of goal-directed behavior). Each personality trait is addressed by 12 items, with five response options from "strongly disagree" to "strongly agree" on a Likert scale. [11] Responders obtain a score of 0-4 on each question and a total score of 0-48 for each of the five dimensions. The NEO-FFI has been shown to be a valid and reliable assessment tool. [12,13]
The dietary habits questionnaire contains 20 questions designed by the authors. Examples of the dietary habits items include "eating breakfast," "having irregular meals during the day," "eating even when satiated," "eating fast," "consuming junk foods when hungry," "adding salt to meals," "drinking water or other drinks with meals," and "eating while doing something else, such as watching TV." The responses range from "always" to "never" on a five-point scale of 1 to 5. Taken together, the lowest possible score on this questionnaire is 20, indicating the unhealthiest dietary habits, and the maximum score is 100, indicating the healthiest dietary habits. The validity of the questionnaire was evaluated by five professors of nutrition; drawbacks were examined and some questions were added, deleted, or modified to produce the final questionnaire. To assess the reliability of the questionnaire, a pilot study (n = 20) was undertaken on a separate sample of female students, which yielded a Cronbach alpha coefficient of 0.75.
A self-administered questionnaire on food preferences, comprising 19 questions, was designed by the authors to assess preferences for different food groups and tastes. Of these, 5 items referred to preferences for sweet, salty, sour, spicy, and fatty tastes. The other 14 items assessed preferences for different food groups, including bread and cereals, meats, grains, milk and dairy products, fruits, vegetables, biscuits, cakes and cookies, ice cream, chocolate and cocoa, soft drinks and sweetened fruit juices, chips and puffs, and tea and coffee. Responses to the items were on a four-point scale (high, moderate, a little, never). The content validity of the questionnaire was confirmed by the same nutrition experts, and refinements were made to produce the final version. Pilot testing (n = 20) on a sample of female students yielded a Cronbach alpha coefficient of 0.72.
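A minimal sketch of how such instruments can be scored is given below. The item-to-trait mapping is a hypothetical placeholder, since the paper does not list the NEO-FFI key, and reverse-keyed items are ignored for simplicity.

```python
import numpy as np

def neo_ffi_scores(responses, trait_items):
    """Sum 0-4 Likert responses into 0-48 trait scores.

    responses   : array of shape (n_subjects, 60)
    trait_items : dict mapping trait name -> list of 12 item indices
    """
    return {t: responses[:, idx].sum(axis=1) for t, idx in trait_items.items()}

# Hypothetical key: items assigned to traits in an interleaved pattern.
traits = ["neuroticism", "extraversion", "openness",
          "agreeableness", "conscientiousness"]
key = {t: list(range(i, 60, 5)) for i, t in enumerate(traits)}  # placeholder

rng = np.random.default_rng(0)
likert = rng.integers(0, 5, size=(224, 60))     # 224 subjects, 60 items, 0-4
scores = neo_ffi_scores(likert, key)            # each trait score in [0, 48]

# Dietary habits: 20 items on a 1-5 scale -> total score in [20, 100]
habits = rng.integers(1, 6, size=(224, 20)).sum(axis=1)
```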
Procedures
Prior to the study, each student was briefed about the purpose of the research and assured of confidentiality and anonymity, and written consent was obtained from all participants. Subjects were asked to complete the questionnaires as honestly as possible, with the first response that came to mind; completion took approximately 30 min. Height was measured to the nearest 0.1 cm and body weight to the nearest 0.1 kg. Body mass index (BMI) was calculated by dividing the weight (in kg) by the square of the height (in m). Personal information including age, marital status, place of residence, and physical activity level was also recorded. The project was approved by the University Medical Ethics Committee.
Data analysis
All analyses were performed using SPSS version 19.0, and results were considered significant if the P value was less than 0.05. The association between personality traits and the dietary habits score was analyzed using Pearson's correlation coefficient, and Spearman's correlation coefficient was applied to assess the association between personality traits and food preferences. In addition, stepwise multiple regression was conducted to identify which personality traits were most strongly related to the dietary habits score.
Results
More than half of the students resided in university dormitories, and more than two-thirds of them reported low levels of physical activity (Table 1). The mean dietary habits score of the study population was 70.9 ± 9.2. The highest and lowest scores among the personality traits belonged to conscientiousness and neuroticism, respectively (Table 2). High neuroticism (r = -0.33, P < 0.001) and high openness (r = -0.13, P < 0.05) were significantly associated with low dietary habits scores, while high conscientiousness was significantly related to a high dietary habits score (r = 0.26, P < 0.001). There was no significant correlation between dietary habits and age, weight, height, BMI, marital status, place of residence, or physical activity level; the same held for the personality traits. As shown in Table 3, in the first step of the multiple regression, high neuroticism was the strongest predictor of a low dietary habits score, explaining 11% of its variance (model 1). In the second step, high openness entered as a predictor of a low dietary habits score, with neuroticism and openness together explaining 13% of the variance (model 2). In the final model, high conscientiousness was a significant contributor to a high dietary habits score, and the three personality traits together explained 16% of the variance in the dietary habits score (model 3).
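The two correlation analyses described above can be sketched as follows, using scipy; the arrays are synthetic stand-ins for the trait scores and dietary measures, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 224
neuroticism = rng.normal(20, 6, n)
habits = 85 - 0.5 * neuroticism + rng.normal(0, 7, n)   # synthetic link
preference = rng.integers(1, 5, n)                      # ordinal 4-point item

# Pearson for the continuous dietary-habits score
r, p = stats.pearsonr(neuroticism, habits)

# Spearman for the ordinal food-preference items
rho, p_rho = stats.spearmanr(neuroticism, preference)
print(f"Pearson r={r:.2f} (p={p:.3g}); Spearman rho={rho:.2f} (p={p_rho:.3g})")
```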
The links between personality traits and food-group preferences are shown in Table 5. High neuroticism had a negative relationship with preference for milk and dairy products (r = -0.15, P < 0.05). High extraversion showed a positive correlation with preference for fast foods (r = 0.15, P < 0.05), ice cream (r = 0.14, P < 0.05), and chocolate and cocoa (r = 0.19, P < 0.05), and a negative relationship with preference for meats (r = -0.21, P < 0.05). High openness was positively correlated with preference for meat (r = 0.18, P < 0.05) and biscuits, cakes, and cookies (r = 0.15, P < 0.05) and was negatively associated with desire for fruits (r = -0.17, P < 0.05). In addition, high agreeableness indicated a negative preference for soft drinks and sweetened fruit juices (r = -0.17, P < 0.05). High conscientiousness showed a positive correlation with preference for milk and dairy products (r = 0.19, P < 0.05), vegetables (r = 0.19, P < 0.05), and nuts (r = 0.18, P < 0.05) and a negative relationship with preference for biscuits, cakes, and cookies (r = -0.13, P < 0.05). No significant association was seen between food preferences or personality traits and the subjects' age, weight, height, BMI, marital status, place of residence, or physical activity level.
Discussion
The present study aimed to assess the possible associations between personality traits and dietary habits as well as food preferences. It demonstrates that high neuroticism was significantly associated with a low dietary habits score, consistent with previous studies. In the Helsinki Birth Cohort Study, Tiainen et al. found that neuroticism was associated with lower fish and vegetable intakes, [14] and in another study it was positively related to endorsing a "convenience diet" and negatively associated with following a "Mediterranean diet". [15] Meanwhile, a study in overweight and obese women concluded that higher neuroticism had a positive relationship with disinhibition and susceptibility to hunger. [16] Other studies have shown a negative correlation between neuroticism and healthy eating. [1,17,18] One facet of neuroticism is hastiness: individuals with this characteristic are unable to control their desires, even for food, so at high levels of neuroticism such desires become too strong to control. Other aspects of neuroticism are feeling depressed and being vulnerable to stress, [11] which lead individuals to choose unhealthy options. [19] It therefore seems that the association of neuroticism with unhealthy dietary habits is mediated indirectly by counter-regulatory emotional eating and by responding to negative emotions and stress with unfavorable food habits. [17,20]
There was no significant association between high extraversion and the dietary habits score. Brummett et al. likewise did not report any significant link between extraversion and self- or spousal ratings of dietary quality, [6] and Goldberg and Strycker suggested that extraversion had no significant correlation with a general healthy diet among members of the Eugene-Springfield Community Sample. [21] On the other hand, Mõttus et al., in a study of Estonians aged 18-89 years, concluded that higher extraversion was related to higher scores on a "health aware diet", [5] and Provencher et al. suggested that high extraversion has a positive association with disinhibition and susceptibility to hunger. [16]
We observed that high openness was significantly associated with a low dietary habits score, in accord with Kikuchi and Watanabe's study, which indicated a negative relationship between high openness and avoidance of burnt fish or meat. [8] However, in some studies openness was related to a "general healthy diet", [21] high consumption of fruits and vegetables, [14,17,22] and low consumption of confectionery items and chocolate. [14] Furthermore, openness was positively related to following the "health aware diet" and negatively related to following the "traditional diet". [5]
In terms of agreeableness, no significant association was observed between this trait and the dietary habits score, similar to the results of Goldberg and Strycker [21] and Brummett et al. [6] However, Cho et al. found a positive association between agreeableness and good dietary habits in college students, [23] while Provencher et al. reported that higher agreeableness predicted a lower susceptibility to hunger. [5]
In our study, high conscientiousness showed a positive association with a high dietary habits score, agreeing with Provencher et al., who found that high scores on conscientiousness were positively related to cognitive dietary restraint and negatively correlated with susceptibility to hunger. In other studies, having a healthy diet was likewise associated with conscientiousness. [17,20,21] Conscientious individuals have features such as loyalty, striving for success, caution in decision-making, and being purposeful and determined, [11] so they seem successful in accepting nutritional education and following healthy dietary habits. Indeed, the relation between conscientiousness and healthy dietary habits appears to be mediated indirectly by promoting regulatory restrained eating (i.e., selective restraint of energy intake) and by reducing emotional and external eating (i.e., eating when external food cues are present in the environment). [17,19]
In interpreting the regression of the dietary habits score on the personality traits, it is important to note that, among the Big Five, high neuroticism was the strongest inverse predictor of the dietary habits score, followed by high openness, while high conscientiousness was the strongest predictor of healthy dietary habits. Taken together, these three personality traits can predict healthy or unhealthy dietary habits.
In terms of the association between personality traits and food preferences, high neuroticism was positively correlated with preference for salty, sour, and fatty foods and negatively correlated with preference for milk and dairy products. Neurotic individuals prefer unhealthy tastes and foods to overcome their negative feelings. [19] High extraversion showed a positive correlation with preference for fast foods, ice cream, chocolate, and cocoa and a negative relation to preference for meats. Extraverts are social, warm, and loving and tend to participate in social groups; [11] they are also drawn to positive emotions such as joy, happiness, and love. Accordingly, extraverts may tend to eat varied snacks like fast foods, ice cream, chocolate, and cocoa, which can be associated with pleasure and positive emotions. High openness was positively correlated with preference for meats and biscuits/cakes/cookies and negatively associated with desire for fruits. Generally, individuals with high openness seek diversity and are flexible in their actions and behaviors; these features are usually seen in activities such as eating unusual foods. Such people prefer novelty and diversity over routine and, owing to their intellectual curiosity, experiment with new foods. [11] Accordingly, they may prefer food groups with higher variability and new food items like biscuits/cakes/cookies. High agreeableness showed a negative relation to preference for soft drinks and sweetened fruit juices. Individuals with high scores on agreeableness have features such as trust, simplicity, and companionship, together with a high capacity to adapt to people and the environment. [11]
It therefore seems that they are successful in adopting healthy dietary habits and, accordingly, seldom consume soft drinks and sweetened fruit juices. High conscientiousness had a positive correlation with preference for milk and dairy products, vegetables, and nuts, and a negative relationship with preference for salty foods and biscuits/cakes/cookies. Conscientious individuals seem eager to achieve healthy dietary habits and to avoid harmful consequences. [11]
The present study had particular strengths. It is the first to explore the relation between personality traits and preferences for different food groups. Second, the questionnaires were completed in a stable mood state, while participants were not hungry, to avoid biasing the reported food preferences. Finally, as other research exhibits gender bias, we included only females, so that the results would be more generalizable within this group and could establish gender-specific, evidence-based guidance. However, the study had methodological limitations. First, it was cross-sectional, so no causal association between personality traits and dietary habits or food preferences can be established. Second, the economic status of the participants was not assessed in the questionnaire.
Conclusion
Higher conscientiousness and lower neuroticism and openness contribute to healthier dietary habits. In addition, high scores on conscientiousness and agreeableness and low scores on neuroticism are related to healthier food preferences. From a clinical viewpoint, assessment of personality traits could be useful to identify individuals who may be at risk of unhealthy dietary habits. Tailored nutrition education is therefore suggested, based on an approach that takes individual trait differences into account, to modify dietary habits and food preferences and thereby improve individuals' health and prevent chronic disease in students.
Declaration of patient consent
The authors certify that they have obtained all appropriate patient consent forms. In the form, the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published, and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.
Analytical maximum likelihood estimation of stellar magnetic fields
The polarised spectrum of stellar radiation encodes valuable information on the conditions of stellar atmospheres and the magnetic fields that permeate them. In this paper, we give explicit expressions to estimate the magnetic field vector and its associated error from the observed Stokes parameters. We study the solar case, where specific intensities are observed, and then the stellar case, where we receive the polarised flux. In this second case, we concentrate on the explicit expression for a slow rotator with a dipolar magnetic field geometry. Moreover, we also give explicit formulae to retrieve the magnetic field vector from LSD profiles without assuming mean values for the LSD artificial spectral line. The formulae have been obtained assuming that the spectral lines can be described in the weak field regime and using a maximum likelihood approach. The errors are recovered by means of the Hessian matrix. The bias of the estimators is analysed in depth.
INTRODUCTION
The most precise measurements of stellar magnetic fields are based on the observation and interpretation of polarisation in spectral lines. Most of the magnetic fields detected in stars at different phases of evolution have been found by means of the Zeeman effect, which generates linear and circular polarisation in the presence of a magnetic field (e.g., Wade et al. 2000; Bagnulo et al. 2002; Jordan, Werner & O'Toole 2005; Aznar Cuadrado et al. 2004; O'Toole et al. 2005; Silvester et al. 2009; Leone et al. 2011). Here we present analytical expressions for inferring the magnetic field vector from the observed Stokes profiles induced by the Zeeman effect in the weak field approximation.
The weak field approximation is broadly applied to the inference of solar and stellar magnetic fields from observations of the Stokes profiles. It is an analytical solution to the radiative transfer equation whose basic assumption is that the magnetic field is sufficiently weak throughout the whole region of the atmosphere where a spectral line is formed. Although simple, this approximation is very useful, since non-magnetic mechanisms usually dominate the shape of spectral lines in both the solar and the stellar case. For example, in the quiet Sun, which occupies the vast majority of the solar surface (far from the sunspots that harbour very strong fields), the spectral lines are well described by the weak field approximation. This has underpinned the success of many synoptic magnetographs, like those of Big Bear (Spirock et al. 2001; Varsik 1995), and it is even used to produce modern vector magnetograms like those obtained with the IMaX instrument (Martínez Pillet et al. 2011) onboard the Sunrise balloon (Solanki et al. 2010). The weak field approximation has also been used to diagnose the chromosphere (Merenda et al. 2006; Asensio Ramos, Trujillo Bueno & Landi Degl'Innocenti 2008), thanks to the enhanced thermal width of chromospheric spectral lines. In night-time spectropolarimetry, the weak field approximation is at the base of least-squares deconvolution (LSD; Donati et al. 1997), the most successful technique used to detect and measure magnetic fields in solar-type stars and other stars in which the polarisation signal per spectral line is well below the noise level. Other works have used this approximation to diagnose magnetic fields in a large variety of stellar objects.
A limited selection includes some recent works on central stars of planetary nebulae (Jordan, Werner & O'Toole 2005; Leone et al. 2011), white dwarfs (Aznar Cuadrado et al. 2004), pulsating stars (Silvester et al. 2009), hot subdwarfs, Ap and Bp stars (Wade et al. 2000), and chemically peculiar stars (Bagnulo et al. 2002).
THE WEAK FIELD APPROXIMATION
The weak field approximation is an analytical solution to the radiative transfer equation. The fundamental assumption is that the magnetic field vector is constant and its intensity sufficiently weak throughout the whole region of the atmosphere where a spectral line is formed. Additionally, the line-of-sight velocity and any broadening mechanism have to be constant with height in the line formation region. As a consequence, the magnetic field can be treated as a perturbation to the zero-field case. Quantitatively, the approximation holds whenever Δλ_B ≪ Δλ_D, where Δλ_B is the Zeeman splitting and Δλ_D is the width due to the dominant broadening mechanism (thermal, rotational, etc.). From this definition it is clear that the weak field regime sets in at different field strengths for different spectral lines (depending on the sensitivity to the magnetic field, the local temperature, and the atomic mass) and for different stellar objects (depending on the non-magnetic broadening mechanisms of the spectral lines, rotation being the most efficient one in cool stars).
To first order in Δλ_B, the intensity profile of a spectral line formed in a weak magnetic field is insensitive to the magnetic field; in other words, it fulfils the transfer equation in the absence of a magnetic field. At this order, the circular polarisation, i.e. the Stokes V profile, of a given spectral line has the expression
V(x) = −C Λ g B_∥ ∂I(x)/∂x. (1)
The symbol B_∥ stands for the longitudinal component of the magnetic field, i.e. B_∥ = B cos θ_B, where θ_B is the inclination of the magnetic field with respect to the observer's line of sight (Ω; see Fig. 1) and B is the magnetic field intensity. The symbol g represents the effective Landé factor of the line, which quantifies the magnetic sensitivity of the line and depends only on the quantum numbers of the transition (see, e.g., Landi Degl'Innocenti & Landolfi 2004). Equation 1 is written in terms of the generic wavelength variable x. If x represents the wavelength λ, the parameter Λ equals the square of the central wavelength of the line, λ_0². However, it is customary in stellar spectropolarimetry to express x as a velocity in Doppler units, in which case Λ = cλ_0, with c the speed of light. The constant C = 4.67 × 10⁻¹³ G⁻¹ Å⁻¹.
At first order in Δλ_B, both Stokes Q and U are zero. In order to obtain expressions for the Stokes profiles characterizing linear polarisation, we have to expand the radiative transfer equation to second order in Δλ_B and assume that the spectral line is not saturated. Under these assumptions, the general formulae for Stokes Q and U are
Q(x) = −(1/4) C² Λ² G B_⊥² cos 2φ_B ∂²I(x)/∂x²,
U(x) = −(1/4) C² Λ² G B_⊥² sin 2φ_B ∂²I(x)/∂x². (2)
The symbol G plays the role of the effective Landé factor for linear polarisation and quantifies the sensitivity of linear polarisation to the magnetic field; again, it is only a function of the quantum numbers of the transition (see, e.g., Landi Degl'Innocenti & Landolfi 2004). The symbol B_⊥ = B sin θ_B is the component of the magnetic field perpendicular to the line of sight, and the angle φ_B represents the azimuth of the magnetic field vector with respect to an arbitrary reference direction (ê_a and ê_b in Fig. 1 show the coordinates chosen as the reference for Q > 0).
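A minimal numerical sketch of the weak-field synthesis just described is given below, using a Gaussian intensity profile. The constants follow the definitions above; the sign conventions and prefactors are those assumed in the reconstruction of Eqs. 1-2, and the field geometry is arbitrary.

```python
import numpy as np

C, lam0 = 4.67e-13, 5250.2          # G^-1 A^-1, line centre in Angstrom
g, G = 3.0, 9.0                     # effective Lande factors (Fe I 5250.2 A)
B, theta, phi = 500.0, np.deg2rad(60.0), np.deg2rad(30.0)
Bpar, Bperp = B * np.cos(theta), B * np.sin(theta)

lam = lam0 + np.linspace(-0.3, 0.3, 601)             # wavelength grid (A)
I = 1.0 - 0.5 * np.exp(-((lam - lam0) / 0.05) ** 2)  # Gaussian absorption line

dI = np.gradient(I, lam)            # first derivative dI/dlambda
d2I = np.gradient(dI, lam)          # second derivative

Lam = lam0 ** 2                     # Lambda for x in wavelength units
V = -C * Lam * g * Bpar * dI                                       # Eq. 1
Q = -0.25 * (C * Lam) ** 2 * G * Bperp**2 * np.cos(2 * phi) * d2I  # Eq. 2
U = -0.25 * (C * Lam) ** 2 * G * Bperp**2 * np.sin(2 * phi) * d2I
```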
When observing polarised light from resolved sources like the Sun, we detect specific intensities. However, when observing unresolved stars, the incoming polarised radiation is an integration over the plane of the sky of the individual Stokes parameters at each point of the stellar surface. As a consequence, the specific values of the Stokes flux vector depend on the surface distribution of the magnetic field, on the centre-to-limb variation (CLV) of the radiation, and on the Doppler effect due to stellar rotation. In this paper we neglect rotation, so the derived expressions are valid for those cases where other broadening mechanisms dominate over rotation. In the Sun, since we observe local profiles, the main broadening mechanism is typically thermal. In general, the polarimetric signals of 99% of the solar surface, the so-called quiet Sun, can be explained in terms of the weak field regime for most spectral lines in the visible and the near-IR. For non-resolved objects, the applicability depends strongly on the actual rotational velocity as compared with the temperature, the observed spectral line, the spectral resolution, the organisation of the magnetic field, etc. The thermal broadening depends on the square root of the ratio between the temperature and the atomic weight of the atom; therefore, in general, the higher the temperature and the lighter the atom, the larger the allowed rotational velocity. For low-resolution spectrographs (R ∼ 2000), essentially the only spectral lines that are not wiped out by the lack of spectral resolution are those of hydrogen; for these lines, at a temperature of T ∼ 10000 K, the maximum line-of-sight velocity is 10-20 km s⁻¹. When observing metal lines at spectral resolutions as high as R ∼ 60000, however, the allowed velocities are one order of magnitude lower.
It is customary to introduce a parametrised form of the CLV. In this work we assume a quadratic form (e.g., Claret 2000; Cox 2000), which gives a good balance between the quality of the CLV description and the simplicity of the analytical expressions presented in this paper for the polarised fluxes:
I(x, μ) = I_0(x) [1 − u(1 − μ) − v(1 − μ)²], (3)
where I_0(x) is the intensity profile at disc centre (μ = 1). The parameters u and v take values between 0 and 1 and are assumed constant along the spectral line (although they can vary from line to line); note that the values of u and v have to fulfil the condition I(x) > 0. The CLV is given in terms of μ = cos Θ, where Θ is the astrocentric angle between the normal to a point on the stellar surface and the line of sight. Using this law for the CLV (and neglecting rotation), the flux in Stokes I is
F_I(x) = ∫_Σ′ I(x, μ) dΣ′, (4)
where the integral is computed over the visible surface in the plane of the sky, Σ′. For an arbitrary function h(ρ, α) of the polar coordinates ρ and α on the projected stellar surface, we have, in units of the stellar radius,
∫_Σ′ h dΣ′ = ∫₀^{2π} dα ∫₀^1 h(ρ, α) ρ dρ. (5)
Note that the variable ρ = sin Θ, which means that μ = √(1 − ρ²). Plugging in the quadratic expression considered for the CLV, the final closed expression for the integrated Stokes I is found to be
F_I(x) = π I_0(x) (1 − u/3 − v/6). (6)
[Figure 1. Geometry of the stellar model. The symbol Ω represents the line of sight. The vector B displays the magnetic field vector at the stellar surface; its geometry is described in terms of the inclination θ_B with respect to the line of sight and the azimuthal angle φ_B (the axis ê_a being the zero reference for the azimuth). The position of a point P on the stellar surface (Σ) is defined solely by the astrocentric angle Θ; its projection P′ onto the plane of the sky (Σ′) is represented in polar coordinates by the modulus ρ and the angle α, referred to the ê_a and ê_b axes.]
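As a quick consistency check of the limb-darkening integral reconstructed in Eq. 6, the following sketch integrates the quadratic CLV numerically over the visible disc and compares it with the closed form π(1 − u/3 − v/6); the values of u and v are arbitrary.

```python
import numpy as np

u, v = 0.6, 0.2                                    # arbitrary CLV coefficients

# Numerical integration over the projected disc (unit stellar radius);
# azimuthal symmetry reduces the surface integral to a 1-D integral in rho.
rho = np.linspace(0.0, 1.0, 20001)
mu = np.sqrt(1.0 - rho**2)
clv = 1.0 - u * (1.0 - mu) - v * (1.0 - mu) ** 2
integrand = clv * rho
flux_num = 2.0 * np.pi * np.sum(0.5 * (integrand[1:] + integrand[:-1])
                                * np.diff(rho))    # trapezoidal rule

flux_closed = np.pi * (1.0 - u / 3.0 - v / 6.0)
print(flux_num, flux_closed)                       # the two should closely agree
```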
Integrating Eqs. (1) and (2) over the visible surface, the expressions for the polarised flux (Eqs. 7) retain the same functional form as the local profiles, with the components of the magnetic field replaced by CLV-weighted surface averages ⟨B_∥⟩, ⟨B_⊥² cos 2φ_B⟩, and ⟨B_⊥² sin 2φ_B⟩ (Eqs. 8). In other words, the weak-field approximation for the integrated polarised flux remains formally the same, but the components of the magnetic field B_∥ and B_⊥ appear weighted by the CLV law. In general, the components of the magnetic field may depend on the position on the stellar surface in a complicated manner, so the previous integrals might not have closed expressions. In the simple case of a magnetic field that is constant across the stellar surface, we recover ⟨B_∥⟩ = B_∥, ⟨B_⊥² cos 2φ_B⟩ = B_⊥² cos 2φ_B, and ⟨B_⊥² sin 2φ_B⟩ = B_⊥² sin 2φ_B. One of the simplest non-trivial configurations we can consider explicitly is that of a dipolar field, for which the magnetic field vector at each surface point is
B(r̂) = (H_d/2) [3 (ê · r̂) r̂ − ê], (9)
where the unit vector ê defines the orientation of the dipole and the unit vector r̂ indicates positions on the stellar surface. The quantity H_d represents the magnetic field strength at the poles of the dipolar field. From Eq. 7 and Eq. 9, and after some algebra, it is possible to write the flux of the Stokes parameters (Eqs. 10) in terms of H_∥ = H_d cos θ_d and H_⊥ = H_d sin θ_d, where θ_d is the inclination of the dipole axis and φ_d is its azimuth (for a similar derivation see Landolfi et al. 1993).
The previous formalism demonstrates that the weak-field approximation leads to formally the same expressions whether we consider specific intensities (applicable whenever spatial resolution is available) or fluxes (whenever the object of interest cannot be resolved). In other words, the observed circular (linear) polarised spectrum is proportional to the first (second) derivative of the observed intensity through the longitudinal (orthogonal) component of the magnetic field; the only difference resides in the exact definition of the components of the field, which, in the spatially unresolved case, are averages over the stellar surface weighted by the CLV. For the sake of simplicity, all cases can therefore be combined into a single set of general expressions (Eqs. 11), whose newly defined variables are summarized in Tab. 1.
ESTIMATION OF THE MAGNETIC FIELD VECTOR
Once the model is set, our aim is to infer the magnetic field vector, parametrised in terms of B_∥, B_⊥, and φ, from the observed Stokes profiles.
However, in practice we are often in a simpler situation. Moreover, although the number of photons arriving in the line cores is smaller than in the far wings, we make the approximation that the noise variance is wavelength-independent. Furthermore, we also consider it to be independent of the considered line, so that σ^i_j = σ. These simplifications lead to less cluttered expressions for the inferred parameters; the general expressions that emerge from the optimisation of Eq. (12) can be found in the Appendix. In order to infer a certain parameter, we have to find the global minimum of the χ². This is obtained by solving the non-linear system of equations that results from setting the derivatives of the χ² function with respect to the parameters to zero. By doing so (see the Appendix), we obtain the expressions of the magnetic field vector in terms of the observables, Eqs. (13), and the derived quantities, Eqs. (14). The phase shift φ0 is used to set the correct quadrant for the azimuth and depends on the sign of the numerator U = Σ_{ij} U^i_j I″^i_j and of the denominator Q = Σ_{ij} Q^i_j I″^i_j as follows: one prescription holds when Q ≠ 0 and another in the case Q = 0. If both Q and U are zero, the angle φ is obviously undefined. The estimated errors can be computed from the covariance matrix (e.g., Press et al. 1986). In the simple case we consider, of an equal, wavelength-independent standard deviation for all spectral lines, the covariance matrix is diagonal (no correlation between the parameters), and the errors at a confidence level of 68.3% (one sigma) are given by Eqs. (17).

BIAS OF THE MAXIMUM LIKELIHOOD ESTIMATOR It is well known that maximum likelihood estimators may suffer from biases. The bias is the difference between the expected value of the estimator and the true value of the parameter. Each individual parameter has to be studied separately, using analytical or numerical simulations, to understand to what extent the estimations of B∥, B⊥, and φ are subject to bias. For simplicity, the simulations we present in the following refer to the resolved case, in which we use specific intensities. However, the behaviour of the estimator is the same in the stellar case. In order to carry out the simulations, we focus on the Fe i line with central wavelength λ0 = 5250.2 Å. This spectral line is produced by the transition 5D0 − 7D1. The value of the effective Landé factor for circular polarisation is g = 3, while it is G = 9 for linear polarisation. We consider a constant magnetic field strength of B = 500 G, which is sufficiently weak so that the weak-field approximation can still be considered valid. [Figure caption (inferred parameters for different noise levels). The colour code is the following: yellow represents the inversion of the profiles without noise; orange displays the results for a noise level of 10⁻⁵ Ic; pink for 5 × 10⁻⁵ Ic; red for 10⁻⁴ Ic; brown for 5 × 10⁻⁴ Ic; dark green for 10⁻³ Ic; light green for 1.5 × 10⁻³ Ic; light blue for 2 × 10⁻³ Ic; dark blue for 5 × 10⁻³ Ic; light violet for 10⁻² Ic; and dark violet for 5 × 10⁻² Ic, where Ic is the continuum intensity.] The inclination and the azimuth of the field are set to vary uniformly between 0° and 180°. Letting the azimuth vary in this interval, we avoid the 180° ambiguity in the azimuth present in the radiative transfer equation. For simplicity, we consider a Gaussian intensity profile of the form I(x) = 1 − d_c exp(−x²/w²), where d_c = 1/2 and w = 0.05 Å. The parameters of the Gaussian profile have been fixed to fit a solar observation obtained with the IMaX instrument.
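A minimal numerical version of this experiment can be written in a few lines: synthesize the weak-field Stokes V for the Gaussian profile just described, add Gaussian noise of a chosen amplitude, invert with the closed-form estimator, and collect the percentiles 16/50/84 over many realisations. The constants and realisation count below are illustrative choices, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
C, geff, lam0 = 4.67e-13, 3.0, 5250.2
wav = np.linspace(-0.3, 0.3, 201) + lam0
I = 1.0 - 0.5 * np.exp(-((wav - lam0) / 0.05)**2)
didw = np.gradient(I, wav)
alpha = C * geff * lam0**2

B_true, noise = 500.0, 1e-3          # field in G, noise in units of Ic
V0 = -alpha * B_true * didw

est = []                              # 500 noise realisations, as in the text
for _ in range(500):
    V = V0 + rng.normal(0.0, noise, size=V0.size)
    est.append(-np.sum(V * didw) / (alpha * np.sum(didw**2)))
p16, p50, p84 = np.percentile(est, [16, 50, 84])
print(f"B_par: {p50:.1f} G (+{p84 - p50:.1f}/-{p50 - p16:.1f})")
# The median clusters around 500 G: the longitudinal estimator is unbiased.
```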
From the IMaX observational capabilities, it can be verified that the 5250.2 Å line is in the weak-field regime up to ∼1 kG (with the line assumed to be in local thermodynamic equilibrium in a quiet-Sun model atmosphere). We have synthesized 500 profiles for different combinations of the inclination and azimuth and different noise realisations. The added noise has Gaussian statistics, and we consider the effect of different standard deviations. We use Eqs. (13) to compute the inferred values of the parameters. Since we repeat the experiment for different realisations of the noise, we end up with a distribution of values for each parameter. We adopt the median value as the estimate of the parameter (the percentile 50, P50, i.e., the value of the parameter that contains 50% of the area of the distribution). To quantify the dispersion produced by the noise we use the percentiles 16 and 84 (which encompass one standard deviation around the estimated value). Contrary to B∥ and φB (note that the notation for the magnetic field vector is the one associated with resolved sources), the transversal component of the magnetic field, and hence the inclination angle, presents a non-zero bias. Except for the case in which no noise is added, small transversal components of the field (smaller than or at the level of the noise amplitude) are overestimated. The fundamental reason is that the expression for B⊥ in Eqs. (13) is not robust against noise. It can be verified that if the Q^i_j and U^i_j are at the noise level (i.e., they can be described as Gaussian random variables with zero mean and variance σ²), B⊥ is a random variable following a positive-definite probability distribution, whose value at a given percentile c can be computed in closed form. The percentile 50 (c = 0.5) correctly captures the value of the bias at small values of B⊥. Once this value is computed, if the inferred value of B⊥ is similar to it, one should be aware that the correct value of B⊥ might be smaller. If this is not taken into account, the estimated inclination of the field is larger than the real one, and artificially horizontal fields might be inferred. Note also that, in the stellar case, the inferred inclination of the dipole axis would be larger than the real one.

THE PARTICULAR CASE OF THE LEAST-SQUARES-DECONVOLUTION PROFILE Most detections of faint signals in stellar atmospheres have been achieved by adding many spectral lines. The polarimetric signal per spectral resolution element is known to be well below the noise level, so that line addition is a must to fight against photon noise. The most widely used and successful technique that combines the information of many spectral lines is the Least-Squares Deconvolution (LSD) technique (Donati et al. 1997). The equations presented in this paper allow us to retrieve the magnetic field vector from the polarised stellar spectra taking many spectral lines into account. Thanks to this, we can rewrite the equations to extract information directly from the LSD profiles. The LSD technique is fundamentally based on the application of the weak-field approximation and on the assumption of a common CLV for all spectral lines (in our case, u and v are set constant). This means that all Stokes profiles of each spectral line can be computed from a single spectral profile whose proportionality constant changes from line to line (see Donati et al. 1997), where η^i is the line depth and Ī, Q̄, Ū, and V̄ are the LSD profiles that can be computed using a least-squares procedure, as explained in Donati et al. (1997).
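Because the LSD profile is defined through a linear model, the least-squares step mentioned above is a one-liner in matrix form. The sketch below is a generic illustration for Stokes V in the simplest diagonal case; the weights and the toy spectrum are invented for the example, and real implementations (Donati et al. 1997) include noise weighting and proper line masks.

```python
import numpy as np

def lsd_profile(v_obs, weights):
    """Least-squares deconvolution for Stokes V.

    Model: each observed line i is a scaled copy of a common pseudo-profile,
    V_i(x) = w_i * Vbar(x). Minimising sum_i (V_i - w_i Vbar)^2 pointwise
    gives Vbar = sum_i w_i V_i / sum_i w_i^2.
    """
    w = np.asarray(weights)[:, None]          # shape (n_lines, 1)
    return np.sum(w * v_obs, axis=0) / np.sum(w**2)

# Toy example: 3 lines sharing one pseudo-profile, with different weights.
x = np.linspace(-1, 1, 101)
vbar_true = -x * np.exp(-x**2 / 0.05)         # antisymmetric V-like shape
weights = [0.9, 0.5, 0.3]
rng = np.random.default_rng(1)
v_obs = np.array([w * vbar_true + rng.normal(0, 0.05, x.size)
                  for w in weights])
vbar = lsd_profile(v_obs, weights)            # noise drops as the lines add up
```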
We refer to Kochukhov, Makaganiuk & Piskunov (2010) for an in-depth analysis of the assumptions and potential problems of LSD. From the previous equations, it is easy to show that the magnetic field vector can be inferred directly from the LSD profiles, with K = 1 and K′ = 1/4 for the case of an unresolved star with a constant magnetic field, and the corresponding CLV-dependent values for the stellar dipole (see the Appendix). Note that it is not necessary to assume averaged atomic parameters for the LSD profile, treating it as a mean spectral line. In our case, the estimated magnetic field vector only depends on the observables, on the atomic parameters of each observed spectral line (which are needed to compute the LSD profile), and on the assumed CLV coefficients.

ILLUSTRATIVE EXAMPLES The inference power of the expressions developed in the previous sections is illustrated with the aid of two examples. The first one consists of a simulated stellar dipole using a Milne-Eddington (e.g., Landi Degl'Innocenti & Landolfi 2004) atmosphere; the second is a particularly interesting observational example in which we can illustrate the effect of the bias in the transverse component of the magnetic field with spatial resolution. For the stellar dipole, we simulate a Milne-Eddington atmosphere with a source function that varies linearly with optical depth, S(τ) = S0(1 + βτ). For simplicity, we choose β = 1 and consider a static atmosphere at all points of the stellar surface. We assume a spectral line centred at λ = 5000 Å with a Doppler broadening of 0.04 Å. Both the circular and linear effective Landé factors are equal to 1. The CLV of the Milne-Eddington atmosphere (which we have forced to be wavelength-independent) gives u = 2/3. We integrate the Stokes signals over the visible stellar surface and add Gaussian noise to the final integrated flux with a standard deviation of 5 × 10⁻⁵ in units of the continuum flux, Fc. Figure 3 shows the flux of the simulated Stokes parameters coming from a dipole field with the following parameters: H_d = 1500 G, θ_d = 80°, and φ_d = 25°. The black lines represent the synthetic fluxes, and the rhombs the synthetic observations with added noise. We invert the noisy Stokes fluxes using Eqs. (13), (14) and (17) and obtain H∥ = 265.5 ± 2.4 G, H⊥ = 1501.6 ± 25.7 G, φ_d = 27.2° ± 2.0°, and the derived quantities H_d = 1524.9 ± 25.3 G and θ_d = 80.0° ± 0.2°. All quantities are nicely recovered. In fact, the bias estimates for the perpendicular component H⊥ for the percentiles 16, 50, and 84 (i.e., containing one-sigma probability) are 302.0, 426.5, and 543.8 G, respectively; the computed H⊥ lies well above them and is therefore reliable. Now, we assume a weak dipole with H_d = 100 G, θ_d = 20°, and φ_d = 25°, and the same noise level. The inferred parameters are H∥ = 99.7 ± 2.3 G, H⊥ = 333.9 ± 109.0 G, φ_d = 39.8° ± 37.4°, H_d = 348.3 ± 104.5 G, and θ_d = 73.4° ± 5.1°. In principle, both the longitudinal and the transverse components (as well as the inclination) should be well recovered, while the azimuth remains undetermined. However, the bias of H⊥ for the percentiles 16, 50, and 84 is 293.2, 414.1, and 528.0 G, respectively. The inferred perpendicular component of the dipole is therefore consistent with pure bias. This implies that the inferred value, despite its small (Gaussian) error bar, has to be considered an upper limit. In this case, we know that the perpendicular component of the dipole is actually very small (34 G). Consequently, we overestimate both the perpendicular component and the intrinsic strength of the dipole, and the dipole axis appears much more inclined than it really is.
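The bias figures quoted above can be reproduced to reasonable accuracy with a pure-noise Monte Carlo: feed the transverse-field estimator Q and U that contain only Gaussian noise and read off the percentiles of the resulting distribution. The snippet below is a schematic stand-in for the closed-form percentile relation; the normalisation β of the linear-polarisation signal is an assumed convention, and the profile and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
C = 4.67e-13
lam0, G, w = 5250.2, 9.0, 0.05
x = np.linspace(-0.3, 0.3, 201)
I = 1.0 - 0.5 * np.exp(-(x / w)**2)
d2I = np.gradient(np.gradient(I, x), x)
beta = 0.25 * (C * lam0**2)**2 * G        # assumed normalisation of Q, U

noise, n_trials = 1e-3, 5000
bperp = []
for _ in range(n_trials):
    Q = rng.normal(0, noise, x.size)      # pure noise: true B_perp = 0
    U = rng.normal(0, noise, x.size)
    num = np.hypot(np.sum(Q * d2I), np.sum(U * d2I))
    bperp.append((num / (beta * np.sum(d2I**2)))**0.5)
# Positive-definite by construction, so the percentiles quantify the bias:
print(np.percentile(bperp, [16, 50, 84]))
```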
The second experiment consists of filter-polarimetric data observed with the IMaX instrument on the Sunrise mission. This polarimeter observes the Fe i 5250.2 Å line (for which we carried out the bias experiment above). The data set consists of the four Stokes parameters, observed in the quietest areas of the solar disc centre at a spatial resolution of about 0.15-0.18″ (∼120 km on the solar surface, the best at the moment for instruments with polarimetric capabilities). The noise level in circular and linear polarisation is 10⁻³ in units of the continuum intensity, Ic. The left panel of Fig. 4 shows the continuum intensity image, with brighter areas associated with granular regions, where the plasma ascends to the photosphere. Dark areas are the intergranular lanes, where the motions are preferentially downflowing. The right panel of the same figure displays the estimated bias of the transversal component of the magnetic field, using Eq. (20) for the percentile c = 0.5. The bias has a very particular spatial distribution which mimics reversed granulation (bright areas become dark and vice versa). This spectral line is strongly sensitive to the temperature, becoming very deep and narrow in intergranular lanes and shallower and broader in granules. The dependence of Eq. (20) on the second derivative of the intensity profile, which reflects the width of the spectral line, is the reason why the bias is larger in intergranules and less important in granules. The left panel of Fig. 5 displays the inferred longitudinal component of the magnetic field. As can be seen, the noisy background has values around zero, consistent with the fact that the estimator of this quantity is unbiased. The central panel shows the inferred transversal component of the magnetic field. The noisy background is now filled with magnetic fields of rather intense values, illustrating the spatial pattern of the bias. If this bias in the transversal field is not appropriately accounted for, it may lead to an artificial excess of inclined magnetic fields. This effect could be affecting some of the recent magnetic field inferences in the quiet Sun observed with Hinode (see, e.g., Orozco Suárez et al. 2007; Lites et al. 2008; Sheminova 2009; Ishikawa & Tsuneta 2009, 2010). Since we can characterise the bias by its median value, it is possible to distinguish real signals from false signals produced by the presence of noise. We proceed as follows. We compute a conservative upper limit for a trustworthy field as the bias for the percentile 84 (c = 0.84 in Eq. 20). This value changes from pixel to pixel. Then, we force B⊥ = 0 in those places where the inferred B⊥ is smaller than the bias. The result is represented in the right panel of Fig. 5. Now, the real signals (coming from pixels with linear polarisation clearly above the noise level) are much more evident, and most of the background has disappeared. Note that we have only removed those signals that were produced by the presence of noise (whose expected B⊥ is zero). However, even if B⊥ ≠ 0, the bias can still be important if the noise level is high, as shown in Fig. 2.
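This percentile-84 masking step is easy to express per pixel. Below is a schematic version: bias84 would come from evaluating the percentile relation (or a Monte Carlo such as the one sketched earlier) with each pixel's own intensity profile and noise estimate; all the array names are placeholders.

```python
import numpy as np

def mask_transverse(b_perp, bias84):
    """Zero out B_perp wherever it does not exceed the per-pixel c = 0.84
    noise bias, keeping only linear-polarisation signals that are
    trustworthy at that conservative level."""
    b = np.array(b_perp, dtype=float)
    b[b <= bias84] = 0.0
    return b

# Example: a 2-D map of inferred B_perp and a matching map of bias estimates.
bmap = np.abs(np.random.default_rng(3).normal(120.0, 60.0, (64, 64)))
bias84 = np.full_like(bmap, 150.0)       # placeholder constant threshold
clean = mask_transverse(bmap, bias84)
```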
CONCLUSIONS We have shown that the weak-field approximation (in which the Stokes parameters are proportional to the first and second derivatives of the intensity) also holds for observed stellar fluxes in the case of slow rotators. We have used a maximum likelihood estimator to infer the magnetic field vector from the observed Stokes profiles. The main result of this paper is that we give explicit formulae for the components of the magnetic field vector in terms of the observables. The formulae are general: they hold for specific intensities and for integrated fluxes, and are only slightly modified for LSD profiles. In the particular case of a stellar dipole, the orientation of the dipole axis and its strength can be recovered from a single observation of the full Stokes vector. We have also studied the bias of this maximum likelihood estimator. The longitudinal magnetic field and the azimuthal angle are unbiased quantities. However, the transversal component of the magnetic field, and hence the inclination of the field, are overestimated in the presence of noise. We derive the estimated value of the perpendicular component of the magnetic field in the case that there is no linear polarisation signal above the noise (the bias for B⊥ = 0). We propose to evaluate this bias prior to the inference of the perpendicular component of the field. One should be very cautious when the inferred B⊥ is of the order of the bias.

ACKNOWLEDGMENTS We are grateful to F. Leone for helpful comments. This work has been funded by the Spanish Ministry of Science and Innovation under projects AYA2010-18029 (Solar Magnetism and Astrophysical Spectropolarimetry) and Consolider-Ingenio 2010 CSD2009-00038.

APPENDIX A: DERIVATION OF THE MAXIMUM LIKELIHOOD SOLUTION OF THE STELLAR MAGNETIC FIELD We start from the general weak-field equations that hold both for the resolved (solar) case and for the integrated stellar dipole (Eq. 11), where the explicit expressions for V, Q, U, I′, and I″ for each case can be found in Table 1 of the main text. We denote the Stokes vector by S = [I, Q, U, V]. Assuming that the difference between the data and the model follows a Gaussian distribution, the likelihood (the probability distribution of the data given the parameters) can be written as in Eq. (A1), where the index i refers to the spectral line, j to the wavelength, and k to the element of the Stokes vector. The abbreviation "mod" stands for the model, and σ² for the variance of S − S^mod. Taking the logarithm, we build the log-likelihood, ln L. In order to estimate the parameters that fit the data given the proposed model, we have to maximise the likelihood or, equivalently, its logarithm ln L. Let p = (B∥, B⊥, φ) denote the set of parameters of our model. Following the standard approach, to estimate the vector of parameters p we must solve the set of equations obtained by setting the derivatives of ln L with respect to each parameter to zero. There is an important point to clarify: in our case the model is not fully analytical, because it depends on the observed intensity profile (Asensio Ramos & Manso Sainz 2011). Therefore, assuming no correlation between the model and the observable (the only difference being produced by uncorrelated Gaussian noise), the variance for each Stokes parameter is the sum of the variances of the observation and of the model. Because of this, the variances that appear in Eq. (A4) depend on the actual parameters, which makes the minimisation much harder. Luckily, in the weak-field regime and for the observational spectral resolutions of interest nowadays, it is easy to verify that σ²(S^mod_i) ≪ σ²(S_i). In any case, this condition should be checked before carrying out any inversion using the formulae derived in this work.
Assuming a first-order approximation to the derivative I′, we find the condition of Eq. (A6), where K = 1 for the resolved case and K = (1/10)(15 + u)/(6 − 2u − 3v) for the dipole case. Assuming σ²(S^mod_i) ≪ σ²(S_i) and that the noise in intensity is the same as in circular polarisation, it follows that the spectral sampling has to be larger than or of the order of the Zeeman splitting. For instance, for a wavelength of 5000 Å, a Landé factor of 1.5, and B∥ = 500 G, the spectral sampling has to be larger than or equal to 12 mÅ for K = 1. Luckily, this is the case in most observational situations. For example, at the same wavelength, the expected sampling for two spectrographs with resolving powers R = 60000 and 300000 is 83 mÅ and 16 mÅ, respectively. For the linear polarisation, assuming that its associated noise equals the noise in intensity, we obtain an analogous condition with K′ = 1/4 for the resolved case and K′ = (1/4480)(420 − 68u − 105v)/(6 − 2u − 3v) for the dipole case. For B⊥ = 500 G, φ = 0°, and G = 2.25, we end up with ∆x = 6 mÅ. Taking the previous considerations into account, Eq. (A4) simplifies to the minimisation of the well-known χ² merit function. Explicitly, the derivatives of the χ² with respect to the parameters we want to infer are given by Eqs. (A13)-(A15). By setting these derivatives to zero, we obtain the maximum-likelihood estimate of each parameter. For the longitudinal magnetic field, using Eq. (A13), we obtain a unique solution. Dividing Eqs. (A14) and (A15), we obtain a solution for the azimuth. Dividing Eq. (A15) by cos 2φ and using Eq. (A17), after some algebra we obtain two solutions. One is B⊥ = 0, which is not valid since this solution maximises the χ²; we have tested this by computing the second derivative. The solution that minimises the χ² is given by Eq. (A18). The errors associated with each parameter are computed assuming that the likelihood surface around the maximum is approximately a multidimensional Gaussian. This is equivalent to assuming a parabolic approximation to the χ² close to the minimum. The curvature close to the minimum is given by the Hessian matrix ζ, whose elements are the second derivatives of the χ² with respect to pairs of parameters. The square root of the diagonal of the covariance matrix gives the error estimate for each parameter. This covariance matrix is just the inverse of the Hessian matrix, C = ζ⁻¹. The error bars are thus given by ∆√(C_ii), with ∆ = 1, 1.65, 2, 2.57, 3, and 3.89 for confidence levels of 68.3%, 90%, 95.4%, 99%, 99.73%, and 99.99%, respectively. This computation assumes that, to estimate the error of a parameter, we fix the values of the rest to the ones that maximise the likelihood and compute the confidence levels for the one-dimensional probability distribution of that parameter (see Press et al. 1986, for more details). Note that this approximation does not take into account the degeneracies between parameters, since it does not integrate over the probability distribution of the rest of the parameters but fixes them at a certain value (see Asensio Ramos 2011, for a robust Bayesian inversion). Note also that the covariance matrix is diagonal if the standard deviations of Q and U are the same (σ^i_{Q,j} = σ^i_{U,j}). This is generally the case in solar observations, in which the measurement efficiencies for linear polarisation are similar.
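As a numerical companion to this error prescription, the snippet below builds the Hessian of a χ² numerically at its minimum, inverts it, and scales the diagonal by the chosen ∆. It is a generic recipe, not tied to the specific derivatives of this appendix.

```python
import numpy as np

def param_errors(chi2, p_opt, delta=1.0, h=1e-4):
    """Confidence errors from the curvature of chi2 at its minimum.

    Hessian via central finite differences; covariance C = 2 * H^{-1}
    (since -2 ln L = chi2 + const, the Gaussian curvature of -ln L is H/2).
    errors_i = delta * sqrt(C_ii).
    """
    n = len(p_opt)
    H = np.empty((n, n))
    for a in range(n):
        for b in range(n):
            p = np.array(p_opt, dtype=float)
            def f(da, db):
                q = p.copy(); q[a] += da * h; q[b] += db * h
                return chi2(q)
            H[a, b] = (f(1, 1) - f(1, -1) - f(-1, 1) + f(-1, -1)) / (4 * h**2)
    C = 2.0 * np.linalg.inv(H)
    return delta * np.sqrt(np.diag(C))

# Example with an analytic quadratic chi2 whose true sigmas are (0.5, 2.0):
chi2 = lambda p: (p[0] / 0.5)**2 + (p[1] / 2.0)**2
print(param_errors(chi2, [0.0, 0.0]))    # -> approximately [0.5, 2.0]
```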
Transfusion regimens in thalassemia intermedia Thalassemia intermedia (TI) is a heterogeneous disease, in terms of both clinical manifestations and underlying molecular defects. Some TI patients are asymptomatic until adult life, whereas others are symptomatic from early childhood. In contrast with patients with thalassemia major (TM), the severity of anemia is less, and the patients do not require transfusions during at least the first few years of life. Many patients with TI, especially older ones, have been exposed to the multiple long-term effects of chronic anemia and tissue hypoxia and their compensatory reactions, including enhanced erythropoiesis and increased iron absorption. Bone marrow expansion and extramedullary hematopoiesis lead to bone deformities and liver and spleen enlargement. Therapeutic strategies in TI are not clear, and different criteria are used to decide, on an individual basis, the initiation of transfusion and chelation therapy, modulation of fetal hemoglobin production, and hematopoietic stem cell transplantation. The clinical picture of well-treated TM patients on regular transfusion-chelation therapy is better than that of TI patients who have not received adequate transfusion therapy. Early blood transfusion has a significant role in preventing and treating complications commonly associated with TI, such as extramedullary erythropoiesis and bone deformities, autoimmune hemolytic anemia, leg ulcers, gallstones, pseudoxanthoma elasticum, hyperuricosuria, gout and pulmonary hypertension, which are rarely seen in thalassemia major. Nowadays, indications for transfusion in patients with TI are chronic anemia (Hb < 7 g/dL), bone deformities, growth failure, extramedullary erythropoiesis, heart failure, pregnancy and preparation for surgical procedures. Conclusion: Adequate (regular or tailored) transfusion therapy is an important treatment modality for increasing the quality of life in patients with thalassemia intermedia during childhood. Introduction Thalassemia intermedia was first described in 1955 by Rietti, Greppi and Micheli as too hematologically severe to be called minor, but too mild to be called major. The severity of clinical features is greatly variable and depends primarily on the underlying molecular defects, i.e., on the genotype/phenotype correlation. Three common mechanisms for the pathophysiology of TI are the inheritance of mild β-thalassemia alleles, co-inheritance of α-globin gene mutations, and the inheritance of genetic determinants causing high-level production of HbF. The clinical picture of TI includes chronic anemia, ineffective erythropoiesis and iron overload [3]. Non-transfusional iron overload, in the liver and less so in the heart, develops due to increased gastrointestinal iron absorption [4]. Patients with thalassemia intermedia have a poor appearance and quality of life compared with thalassemia major. Complications occur particularly later in life. They are less common in adequately transfused patients. It is possible that, in the future, an adequate transfusion program will become a more common option for the management and prevention of late complications. The difference between thalassemia major and intermedia treatment is mainly regular transfusion. Thalassemia major is a lifelong transfusion-dependent disease. On the other hand, TI is a heterogeneous disease; some patients are asymptomatic and others are transfusion-dependent. Currently, no clear guidelines are available on transfusion regimens in TI.
[1][2][3] There are some questions waiting to be answered: i) Is transfusion therapy required? ii) Is transfusion therapy a routine treatment approach for patients with thalassemia intermedia? iii) What are the risks/benefits of transfusion therapy? iv) Which regimen of transfusion therapy should be used (tailored or regular)? Patients and Methods Twenty-one patients with thalassemia intermedia were evaluated retrospectively. The patients, aged between 10 and 53 years, have been followed at the Istanbul University School of Medicine, Pediatric and Adult Hematology/Oncology Thalassemia Unit. Patients were diagnosed between the ages of 2 and 14 years. Seven of them were younger and fourteen older than 18 years (Table 1). Results The amount of transfusions was highly variable in patients with TI. Three patients received more than 50 transfusions, whereas four of them had received none (Table 2). The rate of gallstones and the spleen status in patients with non-transfusion-dependent TI are shown in Table 3. The types of complications of thalassemia intermedia were related to age. Autoimmune hemolytic anemia, thalassemic physical appearance and hypersplenism were seen under the age of 10, during childhood. Many complications, such as endocrinopathies and heart disease related to iron overload, extramedullary erythropoiesis, cholelithiasis, thromboembolism and leg ulcers, developed between 10 and 50 years of age; pseudoxanthoma elasticum and pulmonary hypertension occurred after 50 years of age, during adulthood. Complications in our patients with non-transfusion-dependent TI are shown in Table 4. Long-term complications of TI may be severe and irreversible. Early treatment with blood transfusion may prevent these complications. The relationship between the amount of transfusions and complications, together with the types of mutations, is shown in Table 5. We evaluated the relationship between transfusions and Hb level in Table 6. Transfusion indications in our patients with non-transfusion-dependent thalassemia intermedia are shown in Table 7. Discussion There is no adequate clinical definition of TI. TI has a broad clinical spectrum, and different transfusion regimens are used according to these conditions. There are three transfusion therapy options in patients with TI: no transfusions, intermittent or tailored transfusions, and regular transfusions, each with its own risks and benefits (Table 8). i) If the Hb level persists between 9 and 10 g/dL with splenomegaly, it is called mild TI. These patients do not require transfusion. ii) If Hb levels are in the 5-6 g/dL range and the presentation is relatively late, with growth failure and gross skeletal deformities, it is severe TI. These children should be transfused to avoid complications (i.e., treated like TM). iii) If Hb values are between 6 and 9 g/dL and growth and development are reasonably good, it is named moderate TI, and in these patients the decision to transfuse is challenging. There is no routine treatment approach that provides significant benefits. Transfusions may be required if complications develop. Transfusions may also become necessary with advancing age, during infection and pregnancy, and when hypersplenism develops. The diagnosis can be made after a period of observation and often requires revision (a schematic restatement of this three-tier rule is sketched below).
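The Hb-based triage just described can be summarised in a few lines of code. This is only a schematic restatement of the rule of thumb above, with thresholds in g/dL; it is not a clinical tool, and real decisions also weigh growth, skeletal changes and other complications, as the text stresses.

```python
def ti_triage(hb_gdl):
    """Rough severity triage for thalassemia intermedia by steady-state Hb.

    Mirrors the three-tier rule in the text: mild (Hb 9-10 g/dL, no
    transfusion), severe (Hb 5-6 g/dL, transfuse like TM), moderate
    (Hb 6-9 g/dL, individualised decision).
    """
    if hb_gdl >= 9.0:
        return "mild: no transfusion required"
    if hb_gdl < 6.0:
        return "severe: regular transfusion (treat like TM)"
    return "moderate: individualised/tailored transfusion decision"

for hb in (9.5, 7.2, 5.4):
    print(hb, "->", ti_triage(hb))
```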
Thalassemia phenotypes and transfusion requirements are shown in Figure 1. Answers to the following questions are not yet clear: i) Do transfusions prevent complications? ii) Are complications increased by transfusions? iii) When do patients with non-transfusion-dependent TI have to receive transfusions? We have shown that the decision to transfuse is not difficult in the mild or severe forms of thalassemia intermedia. An individualised treatment modality is required in the moderate form of the disease. Today, the following approach to transfusion therapy in TI does not seem appropriate: i) avoiding early blood transfusions and the concomitant requirement for chelation therapy; ii) reserving transfusion until later in the course of the disease, when complications manifest. The decision to initiate transfusion therapy in TI should be based not only on the Hb level but also on signs and symptoms of anemia, the patient's condition (particularly with respect to activity and failure of growth and development), and the early appearance of skeletal changes or other disease complications. Indications to transfuse regularly in thalassemia intermedia are chronic anemia (Hb < 7 g/dL), bone deformities, growth failure, extramedullary erythropoiesis, heart failure and pregnancy [1]. Evaluation of the role of transfusion therapy in the management of TI has been limited, in contrast with TM. In the OPTIMAL CARE study, patients who were placed on transfusion regimens (intermittent or regular) suffered fewer complications related to chronic anemia, ineffective erythropoiesis and hemolysis (mainly extramedullary hematopoiesis, pulmonary hypertension (PHT), and thrombosis), but a higher rate of iron-overload-related endocrinopathy [5-7]. Observational studies have also confirmed that transfused TI patients suffer fewer thromboembolic events, PHT, and silent brain infarcts compared to transfusion-naive patients. Blood transfusion in patients with TI requires closer monitoring and should be individually tailored to meet the patient's needs. Alloimmunization is a relatively common observation in TI, and the risk is decreased if transfusion therapy is initiated before the age of 12 months [1]. Patients with thalassemia intermedia may benefit from an individually tailored transfusion regimen, compared with the regular transfusion regimens implemented in thalassemia major, to help prevent transfusion dependency.
Changing Pattern of Hepatitis A Virus Epidemiology in an Area of High Endemicity Background Continuous assessment of hepatitis A virus (HAV) seroepidemiology is a useful tool to control the risk of infection. Objectives This study aimed to evaluate the changing patterns of anti-HAV seroprevalence in a population which is generally considered to belong to an area of high endemicity. Patients and Methods Overall, the results of 3349 sera collected during the period 2005-2008 from patients attending the University Hospital of Cagliari, Italy, were studied; the mean age of the patients was 52.7 years (SD 16.22). Patients with liver disease were excluded from the study. Age-specific seroprevalence results were compared with those observed in similar previous studies carried out in the same area. Results The overall prevalence of anti-HAV was 74.6%, with consistently lower values in subjects younger than 40 years (17.5%; P < 0.0001), particularly in those under 30 years of age (8.9%, CI 5.8-11.9). A significant declining trend in age-specific seroprevalence has been found in people under 30 years: 61% in 1988, 33% in 1995 and 8.9% in 2005-2008. Conclusions Our findings show that a significant decline in herd immunity has occurred in the last 20 years as a consequence of lower HAV circulation due to improvements in socio-economic and hygienic conditions. Adolescents and young adults are becoming increasingly susceptible to HAV infection, as recent outbreaks of acute HAV hepatitis have shown. Persistent environmental monitoring and the implementation of prevention measures must be considered in order to contain the risk related to this epidemiological shift. Background Hepatitis A is generally an acute, self-limiting liver infection transmitted through the faecal-oral route by a picornavirus, the hepatitis A virus (HAV), which causes 10 million infections worldwide each year (1,2). The clinical severity of the HAV infection varies from an asymptomatic infection to a fulminant fatal disease (3,4), and age is the major factor that influences the clinical course of the primary HAV infection; it is symptomatic in only 4-16% of children, compared to 75-95% of adults (4). The degree of endemicity is closely related to the prevailing hygiene and sanitary conditions, socio-economic level and other development indicators (5). In recent decades, Italy has experienced a declining trend in HAV epidemiology (6), probably related to improvements in health and sanitary conditions, which have been responsible for the progressive decline in infection rates among children under 14 years of age and for a major shift of the highest incidence towards susceptible teenagers and young adults (7). The integrated epidemiological system for acute viral hepatitis surveillance (SEIEVA) reports that the incidence of HAV has declined from 10/100000 in 1985 to 3.6/100000 in 2004, and to 1.1/100000 in 2010, with an increase during 1996-1997 corresponding to a large outbreak which occurred in two regions of southern Italy (Puglia and Campania). Epidemiological patterns vary among the different regions within Italy, with low-endemicity areas in the central and northern regions and intermediate endemicity in the southern and insular regions. The most frequently reported risk factor is the consumption of contaminated seafood, in particular raw or partially cooked shellfish (8-11).
Objectives The aim of the present study was to assess the anti-HAV seroprevalence rates in a sample of the population of Southern Sardinia, a major Italian island, to compare the pattern of immunity with that reported by studies carried out in the same area during the last 20 years, to compare our findings with seroprevalence data from the rest of Italy, and to determine the most appropriate preventive measures to control and manage the risk of HAV infection. Patients and Methods The results of anti-HAV tests were retrospectively collected from 3349 patients attending the University Hospital of Cagliari, Italy. All subjects affected by or suspected of hepatic disease were excluded from the study in order to avoid a possible selection bias that would limit statistical inference. For the purposes of this study, results were collected anonymously, and only the age and gender of the subjects and the sample date were recorded. The samples were tested for anti-HAV IgG by a commercially available Microparticle Enzyme Immunoassay (MEIA; AxSYM HAVAB 2.0, Abbott). All specimens were tested for anti-HAV IgG: 135 of them (4%) had equivocal analytical results and were excluded; they were equally distributed across the age classes. Overall, age-specific and gender-specific seroprevalence rates were computed. In order to compare results with previous prevalence rates, data from studies carried out in 1988 (12), 1995 (13) and 1999-2000 (14) were considered. The statistical analysis was performed using SPSS version 10. In order to assess differences in the seroprevalence rates, a chi-squared test was used; a difference was considered significant when P < 0.05. A 95% confidence interval (CI) was computed, using the Poisson confidence interval when the number of observations was lower than ten. Results After the exclusion of the subjects with equivocal analytical results, 3214 subjects (1657 males and 1557 females) were included in the study; their distribution according to age and gender is reported in Table 1. Discussion Overall, our data show a declining trend of herd immunity with regard to HAV infection in the population of South Sardinia during the last 20 years. This dramatic decline was most relevant in young adults and was a result of decreased exposure to HAV during the first years of life. This was probably due to the improvements in socioeconomic and sanitary conditions which have occurred in this area during the past decades, and it confirms the changing pattern of immunity, shifting Sardinia from a high to a low HAV endemicity area. Nonetheless, the lower rate of immunity in young adults suggests the persistence of a relevant HAV-related risk, due to the marked decline of herd immunity. Our study shows a higher immunity rate compared with the seroprevalence values reported in 1999-2000, possibly as the result of an increase in the circulation of HAV in the exposed population from 2000 to 2008. This is also confirmed by reports from the local health service regarding outbreaks which occurred in Sardinia in the last few years as a result of raw shellfish consumption, with 94% (31/33) of the cases occurring in young adults (< 40 years). Several studies have shown that eating raw seafood is the main risk factor for acquiring an HAV infection in Southern Italy (9-11), and a previous survey showed that Southern Sardinia's population regularly eats uncooked shellfish, this being a relevant factor in a proportion of cases (34.7%) (14).
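For readers who want to reproduce this kind of age-stratified analysis, the computation is elementary: a proportion per age class, a normal-approximation (or exact) confidence interval, and a chi-squared test across classes. The counts below are invented placeholders, not the study data.

```python
import numpy as np
from scipy import stats

def prevalence_ci(pos, n, z=1.96):
    """Seroprevalence with a normal-approximation 95% CI.
    (An exact/Poisson interval should replace this when pos < 10.)"""
    p = pos / n
    half = z * np.sqrt(p * (1 - p) / n)
    return p, max(p - half, 0.0), min(p + half, 1.0)

# Hypothetical counts per age class: (anti-HAV positive, tested)
classes = {"20-29": (18, 203), "30-39": (61, 240), ">=40": (2010, 2771)}
for label, (pos, n) in classes.items():
    p, lo, hi = prevalence_ci(pos, n)
    print(f"{label}: {100*p:.1f}% (95% CI {100*lo:.1f}-{100*hi:.1f})")

# Chi-squared test of independence between age class and serostatus
table = np.array([[pos, n - pos] for pos, n in classes.values()])
chi2, pval, dof, _ = stats.chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, P={pval:.2g}")
```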
Moreover, some authors have suggested that young adults are exposed more often than older people to other well-known risk factors of HAV infection, such as use of intravenous drugs, occupational exposure, homosexual practices, and multiple or occasional sexual contacts (15,16). Paradoxically, our results confirm that as HAV infections become less common, the risk of new infections shifts from children to adults, and the risk of acute, clinically severe hepatitis A increases (17). Our study shows a higher overall rate of immune subjects in Sardinia compared to the rest of Italy. However, this may be due to the higher proportion of subjects older than 40 years in our study population. In fact, considering immunity rates by age class, we observed a higher anti-HAV seroprevalence in the over-40 age class and a significantly lower seroprevalence rate in subjects younger than 40 years (especially in the 20-29 and 30-39 age classes) compared to the rest of Italy and the majority of other European countries (18). This different seroprevalence picture suggests the need to adopt tailored preventive strategies based on specific risk assessments. In order to reduce the risk of HAV infection in an epidemiological picture characterized by high rates of susceptible subjects among adolescents and young adults, vaccination of household contacts of sporadic cases, and vaccination of individuals at higher risk of infection or at risk of complications of HAV hepatitis (in particular those affected by underlying liver disease), represent the main preventive measures. In Italy, the current National Vaccination Plan 2005-2008 (19) recommends vaccination against HAV only for specific population groups (travellers to endemic areas, drug users, men who have sex with men (MSM), soldiers, sewage workers, patients presenting with liver disease, recipients of liver transplants, and HAV-negative haemophiliacs) (19,20). In Puglia, a region of Southern Italy where large epidemics of hepatitis A occurred in the mid-1990s, a free-of-charge mass vaccination program for newborns (15-18 months) and adolescents (12 years) was introduced in 1998 as part of the routine immunisation schedule (21). Actually, the current epidemiological situation does not suggest the need for mass vaccination of newborns and adolescents either in Sardinia or in the rest of Italy, and a vaccination program for at-risk groups would be more suitable for the control of this virus in young adults (22). However, in Sardinia, the very low herd immunity suggests the need to implement educational campaigns for subjects at higher risk and about dietary habits (to eat only cooked shellfish), as well as the implementation of controls by the local health services regarding sanitation in shellfish harvesting and at outlets, in order to avoid this source of infection. Moreover, despite improvements in hygiene conditions and dietary habits, and higher standards of agriculture and manufacturing, the HAV risk associated with importing food from countries with lower standards of environmental hygiene and higher levels of HAV endemicity remains high. In a global economy, fruits and vegetables imported from developing countries can pose a serious risk of an HAV outbreak in a population with no herd immunity, thus allowing the spread of HAV infections from endemic to non-endemic areas (17). Our study has some strengths and limitations. The main limitation is whether or not the study population is representative of the Southern Sardinia population.
In this regard, a possible limitation of our study sample could be that we did not include subpopulations with a high prevalence of HAV infection (i.e., prison inmates and residents of institutions). Likewise, travellers, drug users, people of low socio-economic status and people with health problems may have been over-represented, since a significant proportion of the serum samples were taken from individuals who received care in hospital or underwent specific screening. Despite these limitations, the anti-HAV distribution that we observed probably reflects the overall picture of prevalence in the different age groups of Southern Sardinia. In this scenario, seroprevalence studies can provide the most accurate picture of the circulation of HAV in a given population and represent the most appropriate tool for risk assessment. It has in fact been suggested that routine surveillance of HAV infection, based solely on the reporting of symptomatic cases who seek medical care, may underestimate the risk (18).
The Renormalization Group Equation in N=2 Supersymmetric Gauge Theories We clarify the mass dependence of the effective prepotential in N=2 supersymmetric SU(N_c) gauge theories with an arbitrary number N_f < 2N_c of flavors. The resulting differential equation for the prepotential extends the equations obtained previously for SU(2) and for zero masses. It can be viewed as an exact renormalization group equation for the prepotential, with the beta function given by a modular form. We derive an explicit formula for this modular form when N_f = 0, and verify the equation to 2-instanton order in the weak-coupling regime for arbitrary N_f and N_c. I. INTRODUCTION New avenues for the investigation of N=2 supersymmetric gauge theories have recently opened up with the Seiberg-Witten proposal [1], which gives the effective action in terms of a 1-form dλ on Riemann surfaces fibering over the moduli space of vacua. Starting with the SU(2) theory [1], a form dλ is now available for many other gauge groups [2], with matter in the fundamental [3][4] or in the adjoint representation [5]. This has led to a wealth of information about the prepotential, including its expansion up to 2-instanton order for asymptotically free theories with classical gauge groups [6]. These developments suggest a rich structure for the prepotential F, which may help understand its strong coupling behavior, and clarify its relation with the point particle limit of string theories, when gravity is turned off [7]. Of particular interest in this context are the non-perturbative differential equations derived by Matone in [8] for SU(2), and later extended by Eguchi and Yang in [9] to SU(N_c) theories with only massless matter. It was however unclear how these equations would be affected if the hypermultiplets acquire non-vanishing masses. In the present paper, we address this issue by providing a systematic and general framework for incorporating arbitrary masses m_j. In effect, the masses m_j are treated on an equal footing with the vevs a_k of the scalar field in the chiral multiplet, since they are both given by periods of dλ around non-trivial cycles. For the masses, the cycles are small loops around the poles of dλ, while for a_k, they are non-trivial A-homology cycles. This suggests that the derivatives of F with respect to the masses should be given by the periods of dλ around "dual cycles", just as the derivatives of F with respect to a_k are given by the periods of dλ around B-cycles. We provide an explicit closed formula for such a prepotential, motivated by the τ-function of the Whitham hierarchy obtained in [10]. (In this connection, we should point out that intriguing similarities between supersymmetric gauge theories and Whitham hierarchies had been noted by many authors [11], and had been the basis of the considerations in [9], as well as in [4], the starting point of our arguments.) Written in terms of the derivatives of F, this closed formula becomes the non-perturbative equation for F that we seek. It can be verified explicitly to 2-instanton order, using the results of [6]. Specifically, the differential equation for F is of the form (1.1). The right hand side of (1.1) has been interpreted in [8][9] in terms of the trace of the classical vacuum expectation values, Σ_{k=1}^{N_c} ā_k², although there are ambiguities with this interpretation when N_f ≥ N_c. Mathematically, it can be expressed in terms of ϑ-functions for arbitrary N_c when N_f = 0 (cf. Section III(c) below).
There is little doubt that this should be the case in general. Now we have by dimensional analysis if Λ is the renormalization scale of the theory. Thus the proper interpretation for the equation (1.1) is as a renormalization group equation, with the beta function given by a modular form! Finally, we observe that the effective Lagrangian in the low momentum expansion determines the effective prepotential only up to a k -independent terms. However, masses can arise as vacuum expectation values of non-dynamical fields, and we would expect the natural dependence on masses imposed here to be useful in future developments, for example in eventual generalizations to string theories. II. A CLOSED FORM FOR THE PREPOTENTIAL (a) The geometric set-up for N=2 supersymmetric gauge theories We recall the basic set-up for the effective prepotential F of N=2 supersymmetric SU(N c ) gauge theories. The moduli space of vacua is an N c −1 dimensional variety, which can be parametrized classically by the eigenvaluesā k , N c k=1ā k = 0 of the scalar field φ in the adjoint representation occurring in the N=2 chiral multiplet. (The flatness of the potential is equivalent to [φ, φ † ] = 0). Quantum mechanically, the order parametersā k get renormalized to parameters a k . The prepotential F determines completely the Wilson effective Lagrangian of the quantum theory to leading order in the low momentum expansion. Following Seiberg-Witten [1], we require that the renormalized order parameters a k , their duals a D,k , and the prepotential F be given by where dλ is a suitably chosen meromorphic 1-form on a fibration of Riemann surfaces Γ above the moduli space of vacua, and A j , B j is a canonical basis of homology cycles on Γ. In the formalism of [4], the form dλ is characterized by two meromorphic Abelian differentials dQ and dE on Γ, with dλ = QdE. For SU(N c ) gauge theories with N f hypermultiplets in the fundamental representation, N f < 2N c , the defining properties of dE and dQ are • dE has only simple poles, at points P + , P − , P i , where its residues are respectively −N c , N c − N f , and 1 (1 ≤ i ≤ N f ). Its periods around homology cycles are integer multiples of 2πi; • Q is a well-defined meromorphic f unction, which has simple poles at P + and P − , and takes the values Q(P i ) = −m i at P i , where m i are the bare masses of the N f hypermultiplets; • The form dλ is normalized so that where Λ is the dynamically generated scale of the theory, and is the holomorphic coordinate system provided by the Abelian integral E, depending on whether we are near P + or near P − . It was shown in [4] that these conditions imply that Γ is hyperelliptic, and admits an equation of the form Hereã k are parameters which coincide withā k when N c < N f , but may otherwise receive corrections. It is convenient to setΛ The function Q in dλ = QdE is now the coordinate Q in the complex plane, lifted to the two sheets y = ± √ A 2 − B of (2.3), while the Abelian integral E is given by E = log (y +A(Q)). The points P ± correspond to Q = ∞, with the choice of signs y = ± √ A 2 − B. (b) The prepotential in closed form We shall now exhibit a solution F for the equations (2.1) in closed form. Formally, it is given by However, the above expression involves divergent integrals which must be regularized. For this, we need to make a number of choices. 
First, we fix a canonical homology basis A i , B i , along which the Riemann surface can be cut out to obtain a domain with boundary Next, we fix simple paths C − , C j from P + to P − , P j respectively (1 ≤ j ≤ N f ), which have only P + as common point. As usual the cuts are viewed as having two edges. With these choices, we can define a single-valued branch of the Abelian integral E in Γ cut = Γ \ (C − ∪ C 1 ∪ · · · ∪ C N f ) as follows. Near P + , the function Q −1 provides a biholomorphism of a neighborhood of P + to a small disk in the complex plane. Choose the branch of log Q −1 with a cut along Q −1 (C − ), and define an integral E of dE in a neighborhood of P + in Γ cut by requiring that The Abelian integral E can then uniquely defined on Γ cut by integrating along paths. It determines in turn a coordinate system z near each of the poles P + , P − , and P j , It is easily seen that z is holomorphic around P + , and that z = 2 . The next few terms of the expansion of z in terms of Q −1 are actually quite important, but we shall evaluate them later. Similarly, we set z = e The same choices above allow us to define at the same time a single-valued branch of the Abelian integral λ in Γ cut . Specifically, λ is defined near P + by the normalization with z the above holomorphic coordinate (2.6). As before, λ is then extended to the whole of Γ cut by analytic continuation. Evidently, near P − , λ can be expressed as in the corresponding coordinate z near P − , for a suitable constant λ(P − ). Similarly, near P j , λ can be expressed as for suitable constants P j . The expression (2.4) for the prepotential F can now be given a precise meaning by regularizing as follows the divergent integrals appearing there This method of regularization has the advantage of commuting with differentiation under the integral sign with respect to connections which keep the values of z constant. (c) The derivatives of the prepotential The main properties of F are the following are Abelian differentials of the third kind with simple poles and residues +1 and -1 at P − and P i respectively, normalized to have vanishing A j -periods. We observe that the Wilson effective action of the gauge theory is insensitive to modifications of F by a k -independent terms. The equation (2.12) can be viewed as an additional criterion for selecting F , motivated by the fact that the mass parameter −m j of dλ can be viewed as a contour integral of dλ around a cycle surrounding the pole P j . In analogy with (2.4), the derivatives with respect to m j of a natural choice for F should then reproduce the integral of dλ around a dual cycle. This is the origin of the first term on the right hand side of (2.12), if we view the path from P − to P j as such a dual "cycle". The second term on the right hand side of (2.12) is a harmless correction due to regularization. The expression between parentheses is actually always a multiple of πi, although we do not need this fact. We now establish (2.11) and (2.12). We need to consider the derivatives of dλ with respect to both a k and m j . We use the connection ∇ E = ∇ of [4], which differentiates along subvarieties where the value of the Abelian integral E (equivalently the coordinate z) is kept constant. Then simply by inspecting the derivatives of the singular parts of dλ in a Laurent expansion in the z-coordinate near each pole, we find that where dω k is the basis of Abelian differentials of the first kind dual to the A k -cycles. 
Next, we recall from (2.2) that the residues Res P + (zdλ) and Res P − (zdλ) are constant. Consequently, However, we also have the following Riemann bilinear relations, valid even in presence of regularizations Here dΩ (2) ± are Abelian differentials of the second kind, with a double pole at P ± , vanishing A-cycles, and normalization dΩ (2) The relations (2.15) follow from the usual Riemann bilinear arguments, by considering respectively the (vanishing) integrals on the cut surface Γ cut of the 2-forms d(ω i dω k ), d(Ω j dΩ ± ). Applying (2.15) to (2.14), we obtain However, the expression is just the expansion of dλ in terms of Abelian differentials of first, second, and third kind! The equation (2.11) follows. The equation (2.12) can be established in the same way. First we write l ) (2.19) Substituting in the bilinear relations gives Again, the Abelian differentials recombine to produce dλ, and the relation (2.12) follows. III. THE RENORMALIZATION GROUP EQUATION (a) The renormalization group equation in terms of residues Combining the equations (2.4), (2.11), and (2.12) gives a first version of the renormalization group equation for F , valid in presence of arbitrary masses m j (b) The renormalization group equation in terms of invariant polynomials We can evaluate the right hand side of (3.1) explicitly, in terms of the masses m j , and the moduli parametersã k and Λ of the spectral curve (2.3). For this, we need the first three leading coefficients in the expansion of Q in terms of z at P + and P − . Now recall that at P + , Q → ∞, y = √ A 2 − B, and We consider first the terms in (3.3) of order up to O(Q N c −1 ). Then for N f ≤ 2N c − 2, only the top two terms in A contribute, while for N f = 2N c − 1, we must also incorporate the termΛ 2 where we have introduced the notatioñ This leads to the first two coefficients of z in terms of Q, or equivalently, the first two coefficients of Q in terms of z Comparing with (2.2), we see that this confirms the value of Res P + (zdλ) required there, while the condition that Res P + (dλ) = 0 is equivalent tõ Similarly, in the expansion of A + y to order O(Q N c −2 ), we must consider separately the cases N f < 2N c − 2, N f = 2N c − 2, and N f = 2N c − 1, depending on whether the terms B/A and B 2 /A 3 contribute to this order. Taking into account (3.4), we find Near P − , we have instead Again, considering separately the cases we can derive the leading three terms of the expansion of in terms of Q. Written in terms of an expansion of Q in terms of z, the result is with S − 2 given by Subsituting in the values of Res P + (zdλ) and Res P − (zdλ) given in (2.2), and rewriting the result in terms ofs 2 and the operator D of (1.2), we can rewrite the renormalization group equation (3.1) as Before proceeding further, we would like to note a few features of the renormalization group equation and of our choice of prepotential. (1) The RG equations (3.1) and (3.10) are actually invariant under a change of cuts. Indeed, a change of cuts would shift the values of the regularized integrals (1.4) by a linear expression, and hence F by a quadratic expression in the masses m j , independent of the a k . In view of Euler's relation, such terms cancel in the left hand side of (3.1) and (3.10). Thus the right hand side of the RG only transforms under a change of homology basis, and is a modular form; (2) From the point of view of gauge theories alone, we can in practice ignore on the right hand side of (3.1) and (3.10) terms which do not depend on the a k . 
Such terms can always be cancelled by a suitable a k -independent correction to F . These corrections do not affect the Wilson effective action since it depends only on the derivatives of F with respect to a k ; (3) Some caution may be necessary in interpretings 2 , in terms of the classical order parametersā k . In particular, when N f ≥ N c , there are several natural ways of parametrizing the curve (2.3), which theã k get shifted in different ways toã k = a k [3] [4]. As noted in [6], the prepotential F is independent of such redefinitions of thē a k . However, this would of course not be the case fors 2 ≡ N c k<jā kāj , which argues for a distinct interpretation fors 2 = j<kã kãj . (c) The renormalization group equation in terms of ϑ-functions As noted above, the right hand side of the RG equation (3.1) is in general a modular form. For N f = 0 (and arbitrary N c ), we can exploit the symmetry between the branch points x ± k given by and known formulae for their cross-ratios to write it explicitly in terms of ϑ-functions. More precisely, we observe that Let the canonical homology basis be given by A k cycles surrounding the cut from x − k to x + k , 1 ≤ k ≤ N c − 1 on one sheet, and by B k cycles going from x − N c to x − k on one sheet, and coming back from x − k to x − N c on the opposite sheet. Then for the dual basis of Abelian differentials dω = (dω k ) k=1,···,N c −1 , we introduce the basis vectors e (k) and τ (k) of the Jacobian lattice by We have then the following relations between points in the Jacobian lattice If we choose Q 0 so that φ(x − 1 ) = 1 2 τ (1) , it follows from (3.12) that If we introduce the functions F k l (Q) by 3.14) an inspection of the zeroes shows that we have the following relation between F k l and cross-ratios 3.15) For the Riemann surface (2.2), we also have for all Q Combining with products of expressions of the form (3.15) evaluated at branch points, we can actually identify the branch points where G k is defined to be is independent of k, this expression may be simplified, The evaluation of the functions F k l on the branch points is particularly simple, and we have where the values of φ(x ± ) can be read off from (3.13). This leads to the following expression for the right hand side of (3.10) : which is a modular form. IV. THE WEAK-COUPLING LIMIT It is instructive to verify the renormalization group equation (3.10) in the weakcoupling limit analyzed in [6] to 2-instanton order. We recall the expression obtained in [6] for the prepotential F to two-instanton order in the regime of Λ → 0. Let the functions S ( x) and S k (x) be defined by Then the prepotential F is given by with the terms F (0) , F (1) , F (2) corresponding respectively to the one-loop perturbative contribution, the 1-instanton contribution, and the 2-instanton contribution Here we have ignored quadratic terms in a k , since they are automatically annihilated by the operator D. We also note that the arguments of [6] only determine F up to a kindependent terms, and thus we shall drop all such terms in the subsequent considerations. The formulae (4.2) imply where allΛ 6 terms have been ignored. On the other hand, up to a k -independent terms, the renormalization group equation (3.10) To compare (4.3) with (4.4) we need first to evaluate N c k=1ã 2 k in terms of the renormalized order parameters a k . Using the formula (3.11) of [6], this can be done routinely where we have set∂ k = ∂/∂ã k , and defined functionsS(x),S k (x) in analogy with (4.1), but with a k replaced byã k . 
Inverting $\tilde a_k$ in terms of $a_k$, and rewriting the result in terms of the derivatives $\partial_k = \partial/\partial a_k$ with respect to the renormalized parameters $a_k$, we find an expansion for each $\tilde a_k^2$, and hence for $\sum_{k=1}^{N_c} \tilde a_k^2$. Next, we need a number of identities which can be established by contour integrals, in analogy with the identities in Appendix B of [6]. Using (4.9) we can indeed recast the term $S_k(a_k)\,\partial_k^2 S_k(a_k)$ appearing in (4.10). The equality of the two right hand sides in (4.3) and (4.4) follows.
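For orientation, the homogeneity argument invoked around (3.1) can be made explicit. The following is a schematic statement, assuming the standard convention that the prepotential carries mass dimension two (the precise normalizations are those of the text): Euler's relation for a function homogeneous of degree two in $(a_k, m_j, \Lambda)$ reads

$$2F \;=\; \sum_{k} a_k\,\frac{\partial F}{\partial a_k} \;+\; \sum_{j} m_j\,\frac{\partial F}{\partial m_j} \;+\; \Lambda\,\frac{\partial F}{\partial \Lambda}\,,$$

so the combination of $F$ and its $a_k$- and $m_j$-derivatives on the left hand side of the RG equation is just $\Lambda\,\partial F/\partial\Lambda$. This is also why shifts of $F$ that are quadratic in the masses, such as those induced by a change of cuts, drop out of (3.1) and (3.10).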
New Partition Theoretic Interpretations of Rogers-Ramanujan Identities

A. K. Agarwal and M. Goyal, Center for Advanced Study in Mathematics, Panjab University, Chandigarh 160014, India

The generating function for a restricted partition function is derived. This, in conjunction with two identities of Rogers, provides new partition theoretic interpretations of the Rogers-Ramanujan identities.

Introduction, Definitions, and the Main Results

The following two "sum-product" identities (1.1), valid for any constant $a$, are known as the Rogers-Ramanujan identities. If $n$ is a positive integer, then obviously $(a;q)_n = (1-a)(1-aq)\cdots(1-aq^{n-1})$. They were first discovered by Rogers [1] and rediscovered by Ramanujan in 1913. MacMahon [2] gave the following partition theoretic interpretations of (1.1), respectively (Theorems 1.1 and 1.2). Gordon [3] generalized these theorems, and Andrews [4] gave the analytic counterpart of Gordon's generalization. Partition theoretic interpretations of many more q-series identities like (1.1) have been given by several mathematicians; see, for instance, Göllnitz [5,6], Gordon [7], Connor [8], Hirschhorn [9], Agarwal and Andrews [10], Subbarao [11], and Subbarao and Agarwal [12]. Our objective in this paper is to provide new partition theoretic interpretations of the identities (1.1) which will extend Theorems 1.1 and 1.2 to 3-way partition identities. In our next section, we will prove the following result (Theorem 1.3). The identities (1.5) are due to Rogers [1, p. 330] and [13, p. 331] (see also Slater [14], Identities (20) and (16)); in conjunction with them, (1.6) extends Theorems 1.1 and 1.2 to the following 3-way partition identities, respectively.

Proof of Theorem 1.3

Let $A_k(m,n)$ denote the number of partitions of $n$ enumerated by $A_k(n)$ into $m$ parts. We shall first prove that

$$A_k(m,n) = A_k(m-1,\; n-k-2m+2) + A_k(m,\; n-4m). \qquad (2.1)$$

To prove the identity (2.1), we split the partitions enumerated by $A_k(m,n)$ into two classes: (i) those that have least part equal to $k$, and (ii) those that have least part greater than $k$. We now transform the partitions in class (i) by deleting the least part $k$ and then subtracting 2 from all the remaining parts. This produces a partition of $n-k-2(m-1)$ into exactly $m-1$ parts, each of which is $\ge k$, since originally the second smallest part was $\ge k+2$; furthermore, since this transformation does not disturb the inequalities between the parts, we see that the transformed partition is of the type enumerated by $A_k(m-1,\; n-k-2m+2)$. Next, we transform the partitions in class (ii) by subtracting 4 from each part. This produces a partition of $n-4m$ into $m$ parts, each of which is $\ge k$, as in the first case; here too the inequalities between the parts are not disturbed, and we see that the transformed partition is of the type enumerated by $A_k(m,\; n-4m)$. The above transformations establish a bijection between the partitions enumerated by $A_k(m,n)$ and those enumerated by $A_k(m-1,\; n-k-2m+2)$ together with $A_k(m,\; n-4m)$. This proves the identity (2.1), which leads to (2.5). This completes the proof of Theorem 1.3.

Conclusion

In this paper MacMahon's Theorems 1.1 and 1.2 have been extended to 3-way identities.
The most obvious question arising from this work is the following: does Gordon's generalization of Theorems 1.1 and 1.2 also admit a similar extension? We must add that different partition theoretic interpretations of the identities (1.6) are found in the literature; see, for instance, [15, 16].
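Since the whole construction rests on the analytic identities (1.1), a quick numerical check is easy to script. The following sketch (ours, not part of the paper) verifies the first Rogers-Ramanujan identity, $\sum_{n\ge 0} q^{n^2}/(q;q)_n = \prod_{n\ge 0}(1-q^{5n+1})^{-1}(1-q^{5n+4})^{-1}$, coefficient by coefficient up to a finite truncation order:

```python
# Numerical check of the first Rogers-Ramanujan identity:
#   sum_{n>=0} q^{n^2}/(q;q)_n == prod_{k=1,4 mod 5} 1/(1 - q^k)
# Truncated power series with exact integer coefficients.

N = 50  # truncation order: compare coefficients of q^0 .. q^{N-1}

def mul(a, b):
    """Multiply two truncated power series (coefficient lists of length N)."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b[:N - i]):
                c[i + j] += ai * bj
    return c

def inv_one_minus_qk(k):
    """Series for 1/(1 - q^k): coefficient 1 at every multiple of k."""
    c = [0] * N
    for i in range(0, N, k):
        c[i] = 1
    return c

# Sum side: accumulate t_n = q^{n^2}/(q;q)_n via t_n = t_{n-1} * q^{2n-1}/(1-q^n),
# using n^2 = (n-1)^2 + (2n-1) and (q;q)_n = (q;q)_{n-1}(1-q^n).
sum_side = [0] * N
term = [0] * N
term[0] = 1
sum_side[0] = 1
n = 1
while n * n < N:
    shifted = [0] * N
    for i, t in enumerate(term[:N - (2 * n - 1)]):
        shifted[i + 2 * n - 1] = t
    term = mul(shifted, inv_one_minus_qk(n))
    sum_side = [s + t for s, t in zip(sum_side, term)]
    n += 1

# Product side: exponents congruent to 1 or 4 modulo 5.
prod_side = [0] * N
prod_side[0] = 1
for k in range(1, N):
    if k % 5 in (1, 4):
        prod_side = mul(prod_side, inv_one_minus_qk(k))

assert sum_side == prod_side
print("First Rogers-Ramanujan identity verified to order q^%d" % (N - 1))
```

The second identity can be checked the same way by starting the term recurrence at $q^{n^2+n}$ and taking exponents congruent to 2 or 3 modulo 5 on the product side.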
Association of HCG Level with Ultrasound Visualization of the Gestational Sac in Early Viable Pregnancies

Our primary objective is to verify or refute a 2013 study by Connolly et al. which showed that in early pregnancy, a gestational sac was visualized 99% of the time on transvaginal ultrasound when the HCG level reached 3510 mIU/mL. Our secondary objective was to make clinical correlations by assessing the relationship between human chorionic gonadotropin (HCG) level in early pregnancy when a gestational sac is not seen and pregnancy outcomes of live birth, spontaneous abortion, and ectopic pregnancy. This retrospective study includes 144 pregnancies with an outcome of live birth, 87 pregnancies with an outcome of spontaneous abortion, and 59 ectopic pregnancies. Logistic regression is used to determine the probability of visualizing a gestational sac and/or yolk sac based on the HCG level. A gestational sac is predicted to be visualized 50% of the time at an HCG level of 979 mIU/mL, 90% at 2421 mIU/mL, and 99% of the time at 3994 mIU/mL. A yolk sac was predicted to be visualized 50% of the time at an HCG level of 4626 mIU/mL, 90% at 12,892 mIU/mL, and 99% at 39,454 mIU/mL. A total of 90% of ectopic pregnancies presented with an HCG level below 3994 mIU/mL. These results are in agreement with the study by Connolly et al. Since most early ectopic pregnancies had an HCG value below the discriminatory level for gestational sac visualization, other methods for the evaluation of pregnancy of unknown location such as repeat HCG values are clinically important.

Introduction

Understanding the relationship between early pregnancy, gestational sac development, and human chorionic gonadotropin (HCG) rise is essential for managing an early pregnancy and differentiating between normal pregnancy, spontaneous abortion, and ectopic pregnancy. The concept of an HCG discriminatory cutoff above which a gestational sac should be seen on ultrasound was first mentioned in 1981 by Kadar et al. and was based on transabdominal ultrasound imaging [1]. In 2013, Connolly et al. published the most commonly referenced modern study on discriminatory levels of serum HCG in early pregnancy assessed by transvaginal ultrasound (TVUS) [2]. They studied women who presented to an emergency department between 2007 and 2009 with pain or bleeding in early pregnancy and went on to have a viable pregnancy. They concluded that the gestational sac can be seen 1% of the time on TVUS when the HCG is 390 mIU/mL and 99% of the time when the HCG is 3510 mIU/mL. The discriminatory value of 3510 mIU/mL was higher than those found by previous studies and suggested a transition zone where, with increasing HCG, the likelihood of visualizing the gestational sac increased [3][4][5][6][7]. For this reason, when a pregnancy of unknown location is diagnosed (no gestational sac seen in the uterus on transvaginal ultrasound), the risk of ectopic pregnancy increases with increasing HCG values. Despite being nearly a decade old, this data appears to have never been verified. The Connolly et al.
study's analysis has several limitations that reduce its utility in subsequent research and clinical practice. First, there is no discussion of the assessment of the logit assumption or the evaluation of the goodness of fit of the logistic regression model. Second, the discussion of fractional polynomials (which found HCG^0.5 to be the best-fit model for the gestational sac and the linear model to be the best fit for the yolk sac) is very limited. Third, their study only included viable pregnancies and did not evaluate HCG levels at presentation for patients with no gestational sac on ultrasound who went on to have spontaneous abortions or ectopic pregnancies. Lastly, the raw data is not available, which limits the ability of others to combine data from multiple studies to form larger datasets for analysis. Our primary objective was to independently verify or refute the findings of Connolly et al. regarding the HCG values and the probability of visualizing a gestational sac or yolk sac in early viable pregnancies. Our secondary objective was to make clinical correlations by assessing the relationship between HCG in early pregnancy when a gestational sac is not seen and the outcomes of live birth, spontaneous abortion, and ectopic pregnancy.

Study Population

Patients were identified retrospectively through labor and delivery and ob/gyn triage records at Los Angeles General Medical Center (formerly Los Angeles County Medical Center) for patients presenting from May 2016 to December 2020. This county hospital uses a single medical record system in the outpatient, inpatient, and emergency room settings. A viable pregnancy was defined as either a live birth, a normal pregnancy at greater than 20 weeks gestational age, or fetal heart tones documented at follow-up. A spontaneous abortion was defined as a confirmed abortion or no fetal heart tones documented at follow-up. Finally, ectopic pregnancy was defined as either a confirmed or presumed ectopic pregnancy. At this center, suction dilation and curettage is routinely performed to rule out intrauterine pregnancy prior to medical treatment of ectopic pregnancy. Patients meeting inclusion criteria were between the ages of 12 and 55 years and must have had either a TVUS and HCG test within 12 h of each other or a TVUS between two HCG tests less than 72 h apart. For patients with a TVUS performed between two HCG tests, the HCG at the time of the TVUS was determined by linear interpolation (sketched below). Exclusion criteria included molar pregnancies, unknown pregnancy outcomes, multiple gestations, only transabdominal ultrasound results, uterus visualization obstructed by large fibroids, prior medical or surgical abortion treatment earlier in the pregnancy, HCG greater than 25,000 mIU/mL, and fetal heart tones documented on TVUS. A total of 290 patients met the inclusion criteria, as shown in Fig. 1. The mean age was 30.0 years (SD 7.1 years), and the mean BMI was 29.8 kg/m² (SD 7.0 kg/m²). The study population was 72% Hispanic, 8% African American, 4% Asian, 1% Caucasian, 8% other, and 7% unknown. A total of 89% of the transvaginal ultrasounds were performed by a sonographer in the radiology department, 10% by a supervised ob/gyn resident, 1% by a fellow, and 1% by an attending.
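The interpolation step mentioned above is straightforward to implement. The following is a minimal sketch (the function name and layout are ours, not taken from the study's code):

```python
from datetime import datetime

def hcg_at_scan(t1, hcg1, t2, hcg2, t_scan):
    """Linearly interpolate the HCG level (mIU/mL) at the time of the
    transvaginal ultrasound, given HCG draws at times t1 and t2 with
    t1 <= t_scan <= t2 and t2 - t1 < 72 h per the inclusion criteria."""
    total = (t2 - t1).total_seconds()
    if total <= 0:
        raise ValueError("HCG draws must be at distinct, ordered times")
    frac = (t_scan - t1).total_seconds() / total
    return hcg1 + frac * (hcg2 - hcg1)

# Example: draws of 800 and 1500 mIU/mL 48 h apart, scan at the 24 h mark.
t1 = datetime(2020, 1, 1, 8, 0)
t2 = datetime(2020, 1, 3, 8, 0)
scan = datetime(2020, 1, 2, 8, 0)
print(hcg_at_scan(t1, 800.0, t2, 1500.0, scan))  # -> 1150.0
```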
HCG was measured using the Roche Diagnostics cobas e 801 analyzer using the Elecsys HCG+β assay. This assay has intraassay and interassay coefficients of variation of under 5%. Serum HCG values are reported standardized against the 4th International Standard for Chorionic Gonadotropin from the National Institute for Biological Standards and Control (NIBSC) code 75/589. This assay measures the sum of HCG plus the HCG β-subunit in serum or plasma, including the whole hormone, nicked HCG, the β-core fragment, and the free β-subunit. Of patients who presented more than once, only data from the initial ultrasound was used in the study. The initial data included was reviewed for outliers, and approximately 15% of the initial data was determined to be possible outliers. These charts were reviewed again to verify if the inclusion criteria were met. This study was approved by the University of Southern California IRB (HS-19-00829).

Statistical Analysis

For pregnancies resulting in a live birth, logistic regression was used to model the probability that a gestational sac or yolk sac would be visualized as a function of HCG level (Stata version 16.1, StataCorp, College Station, TX). We used fractional polynomials to determine how best to model the relationship between HCG and visualization of the gestational sac and yolk sac. We evaluated the linearity assumption (the assumption of a linear relationship between HCG and the logit) by constructing LOWESS plots. This was done separately for visualization of the gestational sac and for visualization of the yolk sac. The linear model was the best model for visualization of the gestational sac. The linearity assumption was valid only when HCG was less than 5000 mIU/mL, and we therefore restricted our analysis to this range. Connolly et al. used HCG^0.5 in their model. While we did find that HCG^0.5 was the best-fit first-degree model for the prediction of the gestational sac, it was not significantly better than the linear model (p = 0.23). For this reason, we used the simpler linear model for predicting the probability (p) of visualizing the gestational sac. The natural logarithm transformation of HCG provided the best-fit model for visualization of the yolk sac. The natural logarithm of HCG was linearly related to the logit for all HCG values up to 25,000 mIU/mL. Connolly et al. used the linear model for predicting the probability of visualization of the yolk sac. We found that ln(HCG) was significantly better than the linear model (p < 0.01). For this reason, we use the natural logarithm of HCG in the model for predicting the probability (p) of visualizing the yolk sac. In these equations, p is the probability of seeing a gestational sac or yolk sac, a is the constant term, and b is the coefficient of HCG or ln(HCG). The Hosmer and Lemeshow overall goodness of fit tests did not show evidence of poor fit for predicting the presence of a gestational sac (p = 0.39) or a yolk sac (p = 0.48) [8]. We are providing access to all of the data and detailed statistical methods used to perform and evaluate the logistic regression modeling in Stata through Mendeley Data [9].

Results

Out of 4451 records reviewed, 290 met all criteria for the study, as shown in Fig. 1. Of the included pregnancies, 144 resulted in live birth, 87 in spontaneous abortion, and 59 in ectopic pregnancy. Twenty of the spontaneous abortions presented as pregnancies of unknown location with no visible gestational sac on the initial transvaginal ultrasound.
Of pregnancies that resulted in live births, a gestational sac was predicted to be visualized 50% of the time at an HCG level of 979 mIU/mL, 90% at 2421 mIU/mL, 95% at 2911 mIU/mL, and 99% of the time at 3994 mIU/mL. A yolk sac was predicted to be visualized 50% of the time at an HCG level of 4626 mIU/mL, 90% at 12,892 mIU/mL, 95% at 18,268 mIU/mL, and 99% at 39,454 mIU/mL (Fig. 2). These values are shown in comparison to those reported by Connolly et al. in Table 1, and a graphical representation of their corresponding logistic regressions is shown in Fig. 2. The best-fit logistic regression models give the probability (p) of visualization of the gestational sac and of the yolk sac, respectively (the coefficients implied by the percentiles above are illustrated in the sketch below). HCG values for early pregnancies and their corresponding outcomes are shown in Fig. 3. In Fig. 3, the three groups on the left would all be considered to be a pregnancy of unknown location [10]. Only 6 of 59 (10%) ectopic pregnancies had an HCG above the 99% threshold for detection of the gestational sac of 3994 mIU/mL. Presenting symptoms for the pregnancies of unknown location in Fig. 3 are shown in Table 2.

Discussion

The HCG values at which the gestational sac is predicted to be seen based on this study are similar to those reported by Connolly et al., with the discriminatory level for visualization of the gestational sac being slightly higher (3994 mIU/mL here compared to 3510 mIU/mL). The highest reported HCG value with no gestational sac seen on transvaginal ultrasound is 9083 mIU/mL for a patient with a triplet pregnancy and 4336 mIU/mL for a singleton pregnancy [11,12]. The HCG values for visualizing a yolk sac in this study are higher than those found in the Connolly study. The HCG level for predicting visualization of a yolk sac 99% of the time, 39,454 mIU/mL, is within the confidence interval found in the Connolly et al. study. Even though the Connolly et al. study has a larger sample size, the confidence interval reported in their study is still wide since the sample size is relatively small. One of the biggest limitations of this study is the sample size. Our sample size was limited as we were not able to review data from before 2016. Even in the modern era of medicine where electronic medical records are the norm, compiling data on pregnancies with known outcomes and corresponding ultrasound and HCG values in early pregnancy is a laborious task. The initial study by Kadar et al. in 1981 examined the records of 53 patients. The Connolly et al. study had a sample size of 366 patients who presented with pain or bleeding and went on to have a viable pregnancy. In this study, we examined records of patients who had a live delivery and retrospectively collected data on those with recorded early TVUS and HCG. We also reviewed the gynecology triage records of patients presenting for evaluation of early pregnancies to obtain data on early ectopic pregnancies and spontaneous abortion. This allowed for in-depth clinical correlation which can help evaluate pregnancies of unknown location.
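The reported detection thresholds are internally consistent with a logistic model that is linear in HCG for the gestational sac and linear in ln(HCG) for the yolk sac. The sketch below back-solves the model coefficients from the published 50% and 90% points (our derivation, not the authors' Stata output) and checks that they reproduce the published 99% thresholds:

```python
import math

def solve_logit(x50, x90):
    """Recover (a, b) in logit(p) = a + b*x from the covariate values where
    p = 0.5 (logit 0) and p = 0.9 (logit ln 9)."""
    b = math.log(9.0) / (x90 - x50)
    a = -b * x50
    return a, b

def x_at_prob(a, b, p):
    """Covariate value at which the model predicts probability p."""
    return (math.log(p / (1.0 - p)) - a) / b

# Gestational sac: logit linear in HCG; 50% at 979, 90% at 2421 mIU/mL.
a_gs, b_gs = solve_logit(979.0, 2421.0)
print(round(x_at_prob(a_gs, b_gs, 0.99)))  # ~3995, vs the reported 3994

# Yolk sac: logit linear in ln(HCG); 50% at 4626, 90% at 12,892 mIU/mL.
a_ys, b_ys = solve_logit(math.log(4626.0), math.log(12892.0))
print(round(math.exp(x_at_prob(a_ys, b_ys, 0.99))))  # ~39,450, vs 39,454
```

The small residuals are attributable to rounding in the published percentile values.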
Figure 3 includes the HCG values for pregnancies with no gestational sac on initial TVUS that went on to become a spontaneous abortion. It is likely that some of these pregnancies have high HCG values with no intrauterine gestational sac because the gestational sac had already passed. These could be clinically confused with ectopic pregnancy because some of these pregnancies presented with HCG above the 99% discriminatory level of 3994 mIU/mL. Serial HCG measurement is helpful for detecting spontaneous abortion with a high presenting HCG level because the level typically decreases at a mean rate of 70-75% over 2 days [13]. As seen in Fig. 3, most of the ectopic pregnancies presented with an HCG value under 2000 mIU/mL. We found that 90% of ectopic pregnancies had an HCG at presentation that was less than the value at which 99% of viable pregnancies can be detected by visualizing a gestational sac on transvaginal ultrasound (3994 mIU/mL in this study). Few of the ectopic pregnancies included would have qualified for immediate intervention (medical or surgical treatment for ectopic pregnancy) based on HCG above the discriminatory level at the time of presentation. Table 2 shows that there is a significant symptom overlap between pregnancies of unknown location regardless of the eventual pregnancy outcome (SAB, ectopic, or viable pregnancy). More emphasis should be placed on repeat HCG values for early detection of ectopic pregnancy in addition to assessing clinical presentation. The commonly accepted practice of assessing pregnancy of unknown location by repeating the HCG after 2 days dates back to a 1981 study by Kadar et al. [14]. The 48-h sampling interval was recommended because "after 1 day, the difference between the mean percent hCG increase of intrauterine and ectopic pregnancies (20%) is less than twice the interassay variability." The interassay and intraassay coefficients of variation cited in the 1981 Kadar et al. study are "less than 15%." Since modern-day assays have coefficients of variation of 5% or less, repeating the HCG after 24 h is appropriate in modern practice. This would allow for more rapid management and could decrease the risk of a ruptured ectopic pregnancy. This study is accompanied by the dataset used for the logistic regression as well as all of the Stata code to completely recreate the statistical analysis [9]. This allows other investigators to use our methods with their datasets or to combine datasets from multiple studies. In conclusion, this study is in agreement with the 2013 study by Connolly et al., since the logistic regression model for this data predicts that 99% of early viable singleton pregnancies will have a visible gestational sac on transvaginal ultrasound when the HCG level reaches 3994 mIU/mL. This limits the utility of a discriminatory level to detect ectopic pregnancies. Since only 10% of ectopic pregnancies included in this study had an HCG value above 3994 mIU/mL, the discriminatory level concept is not very useful in detecting ectopic pregnancies in modern practice. We feel that rapid repeat HCG measurement (such as repeating after only 24 h) is an underutilized strategy for evaluating early pregnancies and is appropriate based on modern HCG assays that are much more precise than they were in the 1980s.

Fig. 1 Flow diagram with inclusion and exclusion criteria.
Fig. 2 Predicted probability of detecting a gestational sac (A) and yolk sac (B) based on HCG in early viable pregnancy.
Fig. 3 Comparison of HCG values and early pregnancy findings at presentation, categorized by eventual pregnancy outcome. The first 3 groups on the left would be considered pregnancies of unknown location; 67 spontaneous abortions had gestational sacs and are not shown.
Table 1 Comparison of serum HCG levels and predicted probability of detection of gestational sac and yolk sac for Connolly et al. (n = 366) and the current study at Los Angeles General Medical Center (n = 290).
Table 2 Presenting signs of pregnancy of unknown location and pregnancy outcome.
Ab-initio angle and energy resolved photoelectron spectroscopy with time-dependent density-functional theory

We present a time-dependent density-functional method able to describe the photoelectron spectrum of atoms and molecules when excited by laser pulses. This computationally feasible scheme is based on a geometrical partitioning that efficiently gives access to photoelectron spectroscopy in time-dependent density-functional calculations. By using a geometrical approach, we provide a simple description of momentum-resolved photoemission including multi-photon effects. The approach is validated by comparison with results in the literature and exact calculations. Furthermore, we present numerical photoelectron angular distributions for randomly oriented nitrogen molecules in a short near infrared intense laser pulse and helium-(I) angular spectra for aligned carbon monoxide and benzene.

I. INTRODUCTION

Photoelectron spectroscopy is a widely used technique to analyze the electronic structure of complex systems. 1,2 The advent of intense ultra-short laser sources has extended the range of applicability of this technique to a vast variety of non-linear phenomena like high-harmonic generation, above-threshold ionization (ATI), bond softening and vibrational population trapping. 3 Furthermore, it turned attosecond time-resolved pump-probe photoelectron spectroscopy into a powerful technique for the characterization of excited-states dynamics in nano-structures and biological systems. 4 Angular-resolved ultraviolet photoelectron spectroscopy is by now established as a powerful technique for studying geometrical and electronic properties of organic thin films. 5,6 Time-resolved information from streaking spectrograms, 7 shearing interferograms, 8 photoelectron diffraction, 9 photoelectron holography, 10 etc. holds the promise of wavefunction reconstruction together with the ability to follow the ultrafast dynamics of electronic wave-packets. Clearly, to complement all these experimental advances, and to help to interpret and understand the wealth of new data, there is the need for ab-initio theories able to provide (time-resolved) photoelectron spectra (PES) and photoelectron angular distributions (PAD) for increasingly complex atomic and molecular systems subject to arbitrary perturbations (laser intensity and shape). Photoelectron spectroscopy is a general term which refers to all experimental techniques based on the photoelectric effect. In photoemission experiments a light beam is focused on a sample, transferring energy to the electrons. For low light intensities an electron can absorb a single photon and escape from the sample with a maximum kinetic energy $\omega - I_P$ (where $\omega$ is the photon angular frequency and $I_P$ the first ionization potential of the system), while for high intensities the electron dynamics can be interpreted considering a three-step model. 11 This model provides a semiclassical picture in terms of ionization, followed by free electron propagation in the laser field with return to the parent ion, and rescattering. Such rescattering processes are the source of many interesting physical phenomena. In the case of long pulses, for instance, multiple photons can be absorbed, resulting in emerging kinetic energies of $s\omega - I_P - U_P$ (where $s$ is the number of photons absorbed, $U_P = \mathcal{E}^2/4\omega^2$ is the ponderomotive energy, and $\mathcal{E}$ the electric field amplitude), forming the so-called ATI peaks in the resulting photoelectron spectrum.
In all cases the observable is the escaping electron momentum measured at the detector. In general, the interaction between electrons in an atom or molecule and a laser field is difficult to treat theoretically, and several approximations are usually performed. Clearly, a full many-body description of PES is prohibitive, except for the case of few (one or two) electron systems. [12][13][14] As a consequence, the direct solution of the time dependent Schrödinger equation (TDSE) in the so-called single-active electron (SAE) approximation is a standard investigation tool for many strong-field effects in atoms and dimers and represents the benchmark for analytic and semi-analytic models. 7,10,[15][16][17][18][19][20][21][22][23][24][25][26] Perturbative approaches based on the standard Fermi golden rule are usually employed. For weak lasers, plane wave methods 5 and the independent atomic center approximation 27 have been applied, while in the strong field regime, Floquet theory, the strong-field approximation 10,28 and semiclassical methods 11,29,30 are routinely used. From a numerical point of view, it would be highly desirable to have a PES theory based on time-dependent density functional theory (TDDFT) 31,32 where the complex many-body problem is described in terms of a fictitious single-electron system. For a given initial many body state, TDDFT maps the whole many-body problem into the time dependence of the density, from which all physical properties can be obtained. The method is in principle exact, but in practice approximations have to be made for the unknown exchange-correlation functional as well as for specific density functionals providing physical observables. This latter issue is much less studied than the former, and to the best of our knowledge a formal derivation of momentum-resolved PES from the time-dependent density has not been performed up to now. In any case, several works were published addressing the problem of single and multiple ionization processes within TDDFT. For example, ionization rates were calculated for atoms and molecules, [33][34][35][36][37] and TDDFT with the sampling point method (SPM) has been employed in the study of PES and PAD for sodium clusters. [38][39][40][41] In this work, besides presenting a formal derivation of a photoelectron orbital functional, we report on a new and physically sound scheme to compute PES of interacting electronic systems in terms of the time-dependent single-electron Kohn-Sham (KS) wavefunctions. The scheme relies on geometrical considerations and is based on a splitting technique. [16][17][18][19][20] The idea is based on the partitioning of space into two regions (see Fig. 1 below): in the inner region, the KS wave function is obtained by solving the TDDFT equations numerically; in the outer region, electrons are considered as free particles, the Coulomb interaction is neglected, and the wavefunction is propagated analytically with only the laser field. Electrons flowing from the inner region to the outer region are recorded and coherently summed up to give the final result. In addition to the adaptation of the traditional splitting procedure to TDDFT, we propose a novel scheme where electrons can seamlessly drift from one region to the other and spurious reflections are greatly suppressed. This procedure allows us to reduce considerably the spatial extent of the simulation box without damaging the accuracy of the method.
The rest of this Article is organized as follows. The formalism for describing photoelectrons in TDDFT is delineated in Sect. II. In order to make contact with the literature, we first give a brief introduction to the stateof-the-art for the ab-initio calculation of PES for atomic and molecular systems. In Sect. II A we introduce the geometrical approach in the context of quantum phasespace. The phase-space approach is then derived in the case of effective single-particle theories like TDDFT in Sect. II B. In Sect. II C we introduce the mask method, an efficient propagation scheme based on space partitioning. Three applications of the mask method are presented in Sect. III. One application deals with the hydrogen atom and illustrates the different mask methods in a simple one-dimensional model also in comparison with the sampling point method. 38 The above threshold ionization of three-dimensional hydrogen is examined and compared with values from the literature. In the second application we illustrate PADs from randomly oriented nitrogen molecules in a strong near-infrared ultra-short laser pulse. Comparison with the experiment and molecular strong-field approximation is discussed. 28 The third application of the method regards helium-(I) (wavelength 58 nm) PADs for oriented carbon monoxide and benzene. Results are discussed in comparison with the plane wave approximation. Finally, in Sect. IV we discuss the results and present the conclusions. All our numerical calculations were performed with the real-time, real-space TDDFT code Octopus, 42,43 freely available under the GNU public license. Atomic units are used throughout unless otherwise indicated. II. MODELING PHOTOELECTRON SPECTRA In order to put in perspective the results of the present Article, we will give a brief introduction on the status of the principal techniques available for ab-initio PES calculations. We start our description with the methods employed to study one-electron systems. For one-electron systems PES can be calculated exactly from the direct solution of the TDSE. Several methods have been employed to extract PES information from the solution of the TDSE. The most direct and intuitive way is via direct projection methods where the PES is obtained by projecting the wave function at the end of the pulse onto the eigenstates describing the continuum. These eigenstates are extracted through the direct diagonalization of the Hamiltonian without including the interaction with the field. The momentum probability distribution can then be easily obtained from the Fourier transform of the continuum part of the time-dependent wavefunction. 23 Another approach, that avoids the calculation of the full continuum spectrum, involves the analysis of the exact wavefunction |Ψ after the laser pulse via a resolvent technique. 15,26 In this case, the energy resolved PES is given by the direct projection on out-going wavefunctions with P (E) = | Φ(E)|Ψ | 2 = Ψ|D(E)|Ψ , where Φ(E) denotes an out-going (unbound) electron of energy E of the laser-free Hamiltonian, andD(E) is the corresponding projection operator that can be conveniently approximated. 15,26 Normally, one needs accurate wave functions in a large space domain to obtain the correct distribution of the ejected electrons. This is because the unbound parts of the wave packet spread out of the core region, and conventional expressions for the transition amplitude need these parts of the wave function. 
Solving the TDSE within all the required volume in space can easily become a very difficult computational problem. Several techniques were developed during the years to solve the problem. For simple cases these difficulties can be overcome by the use of spherical coordinates. Geometrical splitting techniques have also been employed. [16][17][18][19][20] Furthermore, formulations in the Kramers-Henneberger frame of reference 44 and in momentum-space 24 led to calculations with remarkable high precision. Recently a promising surface flux method has also been proposed. 25 The exact solution of the TDSE in three dimensions for more than two electrons is unfeasible and the limit rises to four electrons for one-dimensional models. 45 Due to this limitation basically all ab-initio calculations for multi-electron systems are preformed under the SAE approximation. In the SAE only one electron interacts with the external field while the other electrons are frozen, 21 and the TDSE is thus solved only for the active electron. This approximation has been successfully employed in several photoemission studies for atoms and molecules in strong laser fields. 7,10,22 However, the failure of this simple model to describe multi-electron (correlation) effects calls for better schemes. 22 The inclusion of exchange-correlation effects for a system of many interacting electrons can be achieved within TDDFT while keeping the simplicity of working with a set of time-dependent (fictitious) single-particle orbitals. In spite of transferring all the many-body problem into an unknown exchange-correlation functional, the lack of a density functional providing the electron emission probability is a major limitation for a direct access to photoelectron observables from the time evolution of the density (note that, in spite of the Runge-Gross theorem 31 stating that all observables are functionals of the time dependent density, in practice we know few observables that can be written in terms of the time-dependent density, one example being the absorption spectra). There has been some attempts to describe PES and multiple ionization processes with TDDFT in the standard adiabatic approximation. [33][34][35][36][37] All these works use boundary absorbers to separate the bound and continuum part of the many-body wavefunction. The emission probability is then correlated with the time dependence of the number of bound electrons. An alternative and simple scheme is provided by the SPM. 38 Here the idea is to record single-particle wavefunctions in time at a fixed sampling point r S away from the core. The time Fourier transform of the wavefunction recorded at r S represents the probability of having an electron in r S with energy E. The probability to detect one electron with energy E in r S is then given by the sum over all occupied orbitals: This method is easy to implement, can be extended to give also angular information, 41 and is also clearly applicable to the TDSE in the SAE. However, it lacks formal derivation as it is directly based on Kohn-Sham wavefunctions without a direct connection to the many-body state. Furthermore, it is strongly dependent on the position of the sampling point and the minimum distance. This distance sometimes turns out to be quite large in order to avoid artifacts, and is strongly dependent on the laser pulse properties. We discuss further details concerning this method in Sect. III A. 
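The sampling-point expression whose display was lost in extraction can be restated from the surrounding description. Up to a normalization that depends on the Fourier convention, the SPM estimate reads

$$P(E) \;\propto\; \sum_{i}^{\mathrm{occ.}} \big|\,\tilde\psi_i(\mathbf r_S, E)\,\big|^2\,, \qquad \tilde\psi_i(\mathbf r_S, E) \;=\; \int_0^{T} \mathrm{d}t\; e^{iEt}\,\psi_i(\mathbf r_S, t)\,,$$

i.e., the power spectrum of the Kohn-Sham orbitals recorded at the sampling point $\mathbf r_S$ over the total propagation time $T$, summed over occupied orbitals.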
In the following we present an alternative method inspired by geometrical splitting and derive it from a phasespace point of view. The method can be naturally converged by increasing the size of the different simulation boxes. A. Phase-space geometrical interpretation An intuitive description of photoelectron experiments can be obtained resorting to a phase-space picture. Experimental detectors are able to measure photoelectron velocity with a certain angular distribution for a sequence of ionization processes with similar initial conditions. The quantity available at the detector is therefore connected to the probability to register an electron with a given momentum p at a certain position r. From this consideration it would be tempting to interpret photoemission experiments with a joint probability distribution in the phase-space (r, p). Such a classical picture however conflicts with the fundamental quantum mechanics notion of the impossibility to simultaneously measure momentum and position, and prevents us from proceeding in this direction. A link between the classical and quantum picture is needed beforehand. In order to make a connection to a microscopic description it turns out to be convenient to extend the classical concept of phase-space distributions to the quantum realm. A common prescription comes from the Wigner transform of the one-body density matrix with respect to the center of mass R = (r + r )/2 and relative s = r − r coordinates. The d-dimensional (here and after d ≤ 3) transform is defined as with ρ(r, r , t) = dr 2 . . . dr N Ψ(r, r 2 , . . . , r N , t) being the one-body density matrix, and Ψ(r 1 , r 2 , . . . , r N , t) the N -body wavefunction of the system at time t. The Wigner function defined above is normalized and its integral over the whole space (momentum) gives the probability to find an electron with momentum p (position R). As the uncertainty principle prevents the simultaneous knowledge of position and momentum, w(R, p) cannot be a proper joint distribution. Moreover it can assume negative values due to nonclassical dynamics. Nevertheless the Wigner function w(R, p) constitutes a concept close to a probability distribution in phase space (R, p) compatible with quantum mechanics. The quantum phase-space naturally leads to a geometrical interpretation of photoemission. One could think to divide the space in two regions A and B as in Fig. 1 (a), where region B represents the region where detectors are positioned and A is defined as the complement of B. In this picture, PES can be seen as the probability to have an electron with given momentum in B. It is then natural to define the momentum-resolved photoelectron spectrum as where the spatial integration is carried out in region B, and the limit t → ∞ assures that region B contains all photoelectron contributions. From the knowledge of the momentum-resolved PES [cf. Eq. (4)] one can access several different quantities by simple integration. For instance, in three dimensions (d = 3) the energy-resolved PES is obtained integrating over the solid angle Ω: and the photoelectron angular distribution in the system reference frame is given by, In spite of giving an intuitive picture of PES, Eq. (4) is not suited for direct numerical evaluation since it requires the knowledge of the full one-body density matrix in the whole space. In the next section we will make a contact with effective single particle theories like TDDFT to overcome the limitations due to the knowledge of the many-body wavefunction. 
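For reference, the phase-space construction just described can be summarized compactly. Up to Fourier-convention factors, the Wigner transform of Eq. (2) and the observables of Eqs. (4)-(6) read

$$w(\mathbf R,\mathbf p,t) \;=\; \frac{1}{(2\pi)^d}\int \mathrm{d}\mathbf s\; e^{-i\mathbf p\cdot\mathbf s}\,\rho\!\left(\mathbf R+\tfrac{\mathbf s}{2},\,\mathbf R-\tfrac{\mathbf s}{2},\,t\right),$$
$$P(\mathbf p) \;=\; \lim_{t\to\infty}\int_B \mathrm{d}\mathbf R\; w(\mathbf R,\mathbf p,t)\,, \qquad P(E) \;=\; \sqrt{2E}\,\int \mathrm{d}\Omega\; P(\mathbf p)\Big|_{|\mathbf p|=\sqrt{2E}}\,,$$

in atomic units with $E = p^2/2$; the angular distribution of Eq. (6) follows instead by fixing the direction of $\mathbf p$ and integrating over its magnitude.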
In order to avoid integration over the whole space an efficient evolution scheme is presented in Sect. II C. B. Phase space interpretation within TDDFT TDDFT is an effective single particle theory where the many-body wavefunction is described by an auxiliary single Slater determinant Ψ KS (r 1 , . . . , r N ) built out of Kohn-Sham orbitals ψ i (r). 31,32 In order to simplify the notation, we drop the explicit time dependence from the wavefunctions and assume that the following equations are written in the limit t → ∞ as prescribed by Eq. (4). Being represented by a single determinant, the onebody Kohn-Sham density matrix is given by where the sum in carried out over all occupied orbitals. Performing a decomposition of each orbital according to the partition of Fig. 1 (a) we obtain where Ψ A,i (r) is the part of the wavefunction describing states localized in A and Ψ B,i (r) is the ionized contribution measured at the detector in B. The one-body density matrix can now be accordingly decomposed as a sum of four terms From Eq. (9) we can build the KS Wigner function defined in Eq. (2) and obtain the momentum-resolved probability distribution by inserting it into Eq. (4). We note that this step involves a non-trivial approximation, namely that the KS one-body density matrix is a good approximation to the fully interacting one in region B. This is, however, much milder than the assumption that the Kohn-Sham determinant is a good approximation to the many-body wavefunction in region B, as it is done, e.g., in the SPM. The final result is a sum of four overlap double integrals that can be simplified further. For a detailed calculation we refer to Appendix A. The first overlap integral, containing a product of two functions localized in A [cf. Eq. (9)], is zero due to the spatial integration in B. The two next overlap integrals, containing mixed products of wavefunctions localized in A and B, can be reduced by increasing the size of region A. Assuming A to be large enough to render these terms negligible the only integral we are left with is the one containing functions in B, leading to The approximation sign ≈ is a reminder for the error committed in discarding the mixed overlap integrals. Since the probability of finding an ionized electron in region A is zero for t → ∞, we can extend the integration over B in Eq. (10) to the whole space. Using the integral properties of the Wigner transform we finally obtain whereψ B,i (p) is the Fourier transform of ψ B,i (r) and the expression is written in the limit for t → ∞. Equation (11) gives an intuitive formulation of momentumresolved PES as a sum of the Fourier component of each orbital in the detector region. It is worth to note that Eq. (11) is not restricted to TDDFT and can be applied to other effective single-particle formulations such as time-dependent Hartree-Fock and the TDSE in the SAE approximation. The numerical evaluation of the ionization probability from Eq. (11) requires the knowledge of the wavefunction after the external field has been switched off. For ionization processes this means that one has to deal with simulation boxes that extend over several hundred atomic units and this practically constrains the method only to one-dimensional calculations. In the next section we will derive a simple scheme to overcome this limitation making the present scheme applicable for realistic simulations of molecules and nanostructures. C. 
The mask method In the previous sections we described a practical way to evaluate the momentum-resolved PES following the spatial partitioning of Fig. 1 (a) and how this can be conveniently cast in the language of TDDFT. In this section we take a step further in developing an efficient time evolution scheme by exploiting the geometry of the problem together with some physical assumptions. We start by introducing a split-evolution scheme: At each time t we implement a spatial partitioning of Eq. (8) as following where M (r) is a smooth mask function defined to be 1 deep in the interior of region A and 0 outside, as shown in Fig. 2. Such a mask function, along with the partitions A and B, introduces a buffer region C (technically handled as the outermost shell of A), where ψ A,i (r, t) and We can set up a propagation scheme from time t to t as following (13) where U (t , t) is the time propagator associated with the full Hamiltonian including the external fields. Equation (13) defines a recursive propagation scheme completely equivalent to a time propagation in the whole space A ∪ B. In typical experimental setups, detectors are situated far away from the sample and electrons overcoming the ionization barrier travel a long way before being detected. During their journey toward the detector, and far away from the molecular system, they practically evolve as free particles driven by an external field. It seems therefore a waste of resources to solve the full Schrödinger equation for the traveling electrons while their behavior can be described analytically. In addition, an ideal detector placed relatively close to the molecular region would measure the same PES. From these observations we conclude that we can reduce region A to the size of the interaction region and assume electrons in B to be well described by noninteracting Volkov states. Volkov states are the exact solution of the Schrödinger equation for free electrons in an oscillating field. They are plane-waves and are therefore naturally described in momentum space. In the velocity gauge the Volkov time propagator is formally expressed where the time-ordering operator is omitted for brevity and A(τ ) is the vector potential. This is equivalent to the use of a strong-field approximation in the outer region in the same spirit of the Lewenstein model. 46 In summary, the method we propose consists in solving numerically the real-space TDDFT equations in A and analytically propagating the wavefunctions residing in B in momentum space. In this setup region C acts as a communication layer between functions in A and B. Under this prescription, and by handling B-functions in momentum space, Eq. (13) becomes with At each time step the orbital ψ A,i is evolved under the mask function and stored in η A,i , forcing η A,i to be localized in A. At the same time, the components of ψ A,i escaping from A are collected in momentum space bỹ ξ A,i . We then add toξ A,i the contribution of the wavefunctions already present in B at time t by summing up U VψB,i . In order to allow electrons to come back from B to A we include η B,i in A and correct the function in B by removing its Fourier components [second term in Eq. (16d)]. One of the advantages of Eq. (15) is that all the spatial integrals present in η B,i (r, t ) andξ B,i (p, t ) are performed on functions localized in C. 
Therefore, integrals over the whole space are evaluated at the cost of an integration on the much smaller buffer region C, which can be easily evaluated by fast Fourier transform algorithms. Similar considerations hold for integrals in momentum space under the assumption that B-functions $\tilde\psi_{B,i}(p, t)$ are localized in momentum. When region A is discretized on a grid, additional care must be taken in order to avoid wavefunction wrapping at the boundaries and preserve numerical stability. In our implementation, numerical stability is addressed by the use of non-uniform Fourier transforms (see details in Appendix B). There are situations where the electron flow from B to A is negligible. This is the case, for instance, when A is large enough to contain the whole wavefunctions at the time when the external field has been switched off. A propagation at later times will see photoelectrons flowing mainly from A to B. In this situation, $\eta_{B,i}$ and the corresponding correction term in $\tilde\xi_{B,i}$ can be discarded. The evolution scheme of Eq. (15) is thus simplified and becomes Eq. (17). In the following we will refer to Eq. (17) as the "mask method" (MM), and to Eq. (15) as the "full mask method" (FMM). We note here again that, being single-particle propagation schemes, both MM and FMM are not restricted to TDDFT and can be applied to other effective single-particle theories. As a matter of fact, an approach similar to Eq. (17) has already been employed in the propagation of the TDSE equations for atomic systems, 16,17,19 and in TDDFT for one-dimensional models of metal surfaces. 47 We also note that the implementation of absorbing boundaries through a mask function as done in Eq. (16a) can be cast in terms of an additional imaginary potential (exterior complex scaling) in the Schrödinger equation. Such an approach is commonly used in quantum optics. Within the MM, the evolution in A is completely unaffected by the wavefunctions in B, and ionized electrons are treated uniquely in momentum space. Compared with the FMM, the MM is numerically more stable as it is not affected by boundary wrapping. Achieving the conditions under which Eq. (17) is valid may, however, require large simulation boxes. Moreover, since the mask function never absorbs the electrons perfectly, spurious reflections may appear. Suppression of such artifacts requires a further enlargement of the buffer region. With FMM spurious reflections are almost negligible. Choosing between MM and FMM implies a tradeoff between computational complexity and numerical stability that strongly depends on the ionization dynamics of the process under study. In what follows, we will illustrate the differences and devise a prescription to help the choice of the most suitable method in each specific case.

III. APPLICATIONS

In this section we present a few numerical applications of the schemes previously derived. In all calculations the boundary between the A and B regions is chosen as a d-dimensional sphere, implemented by a mask function of the form shown in Fig. 2. Note that numerical studies (not presented here) revealed a weak dependence of the final results on the functional shape of the mask. The time propagation of the orbitals in A is performed with the enforced time-reversal symmetry evolution operator, 48 where H is the full KS Hamiltonian, and the coupling with the external field is expressed in the velocity gauge. The first system we will study is hydrogen.
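Before turning to that benchmark, the bookkeeping of the simplified scheme (17) can be summarized in code. The following is a minimal one-dimensional sketch under stated assumptions — a cos² mask profile (the paper only requires a smooth function of this qualitative shape), a simple split-step stand-in for the enforced time-reversal propagator, and the convention that the quoted vector potential is in units of the speed of light — not the Octopus implementation:

```python
import numpy as np

# 1D sketch of the mask method (MM), Eq. (17): the orbital is propagated on
# the grid, split by the mask M(x), and the escaping part is accumulated in
# momentum space, where it evolves with a Volkov phase.

N, dt, c = 4096, 0.05, 137.036        # grid points, time step, speed of light (a.u.)
R_A, R_C = 70.0, 30.0                  # box radius and buffer onset (a.u.)
x = np.linspace(-R_A, R_A, N, endpoint=False)
p = 2.0 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])

# Smooth mask: 1 inside |x| <= R_C, cos^2 ramp to 0 across the buffer C.
r = np.abs(x)
ramp = np.clip((r - R_C) / (R_A - R_C), 0.0, 1.0)
mask = np.cos(0.5 * np.pi * ramp) ** 2

V = -1.0 / np.sqrt(2.0 + x**2)         # soft-Coulomb potential of the 1D model

def step_full(psi, A_t):
    """One split-step of H = (p + A/c)^2/2 + V in velocity gauge -- a simple
    stand-in for the enforced time-reversal symmetry propagator."""
    psi = np.fft.ifft(np.exp(-0.5j * dt * (p + A_t / c) ** 2) * np.fft.fft(psi))
    return np.exp(-1j * dt * V) * psi

def mm_step(psi_A, psi_B_tilde, A_t):
    """Eq. (17): keep the masked part on the grid; add the Fourier transform of
    the escaping part to the Volkov-propagated momentum-space amplitude."""
    psi = step_full(psi_A, A_t)
    volkov = np.exp(-0.5j * dt * (p + A_t / c) ** 2)
    psi_B_tilde = volkov * psi_B_tilde + np.fft.fft((1.0 - mask) * psi)
    return mask * psi, psi_B_tilde

# After the propagation, |psi_B_tilde|^2 on the p grid approximates the
# momentum-resolved PES of Eq. (11).
```

We now turn to the hydrogen benchmark.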
In spite of being a one-electron system, it is a seemingly trivial case that has been and still is under thorough theoretical investigation. [23][24][25][49][50][51][52] Clearly we do not need TDDFT to study hydrogen and our numerical results are obtained by propagating the wavefunction with a non-interacting Hamiltonian. The interest in this case is focused on the numerical performance of different mask methods as hydrogen provides a useful benchmark. The full TDDFT calculations performed for molecular nitrogen, carbon monoxide, and benzene are later presented in Sect. III B and Sect. III C respectively. In these cases, norm-conserving Troullier-Martins pseudopotentials and the exchange-correlation LB94 potential 53 (that has the correct asymptotic limit for molecular systems) are employed. Finally, in all calculations the starting electronic structure of the molecules is calculated in the Born-Oppenheimer approximation at the experimental equilibrium geometry and the time evolution is performed with fixed ions. A. Photoelectron spectrum of hydrogen As first example we study multi-photon ionization of a one-dimensional soft-core hydrogen atom, initially in the ground state, and exposed to a λ = 532 nm (ω = 0.0856 a.u.) linearly polarized laser pulse with peak intensity I = 1.38 × 10 13 W/cm 2 , of the form where f (t) is a trapezoidal envelope function of 14 optical cycles with two-cycle linear ramps, constant for 10 cycles, and with A 0 = 31.7 a.u. Here A(t) is the vector potential in units of the speed of light c. A soft-Coulomb potential V (x) = −1/ √ 2 + x 2 is employed to model the electronion interaction. We propagate the electronic wavefunction in time and then compare the energy-resolved ionization probability obtained from different schemes. Along with MM, FMM, and SPM we present results for direct evaluation of PES from Eq. (5). In this method the spectrum is obtained by directly Fourier transforming the wavefunction in region B. Since the analysis is conducted without perturbing the evolution of the wavefunction we will refer to it as the "passive method" (PM). This method requires the knowledge of the whole wavefunction after the pulse has been switched off, and since a considerable part of the wave-packet is far away from the core (for the present case a box of 500 a.u. radius is needed for 18 optical cycles), it is viable only for onedimensional calculations. Nevertheless it is important as it constitutes the limiting case for both MM and FMM. In Fig. 3, a color plot of the evolution of the electronic density as a function of time is shown. The electronic wavefunction splits into sub-packets generated at each laser cycle (one optical cycle = 2π/ω = 1.774 fs). These wavepackets evolve in bundles and their slope correspond to a certain average momentum. ATI peaks are then formed by the build up of interfering wavepackets periodically emitted in the laser field and leading to a given final momentum. 54 From Fig. 3 is it possible to see that electrons may be considered as escaped "already" at 30 a.u. away from the center. We set therefore R A = 30 a.u. and calculate energy-resolved PES with the PM. As we can see from Fig. 4 the spectrum presents several peaks at integer multiples of ω following E = sω − I P − U P with U P = A 2 0 /4c 2 = 0.0133 a.u. being the ponderomotive energy, I P = 0.5 a.u. the ionization potential, and s the number of absorbed photons. In this case the minimum number of photons needed to exceed the ionization threshold is s = 6. 
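The peak positions just quoted are easy to verify by hand. A short check of $E_s = s\omega - I_P - U_P$ for the 532 nm pulse, using the values given in the text (our script):

```python
omega = 0.0856     # photon energy (a.u.) for lambda = 532 nm
A0 = 31.7          # vector potential amplitude (a.u., in units of c)
c = 137.036        # speed of light (a.u.)
IP = 0.5           # ionization potential of the 1D soft-core model (a.u.)

UP = A0**2 / (4.0 * c**2)     # ponderomotive energy
print(round(UP, 4))           # 0.0134 a.u., consistent with the quoted 0.0133

# Smallest photon number s with E_s = s*omega - IP - UP > 0:
s = 1
while s * omega - IP - UP <= 0:
    s += 1
print(s, round(s * omega - IP - UP, 4))  # s = 6, E_6 ~ 0.0002 a.u.
```

The first ATI peak thus sits barely above threshold, consistent with the statement that six photons are the minimum needed to ionize.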
Of course, the spectrum is only in qualitative agreement with three-dimensional calculations 15 as expected from a one-dimensional soft-core model. [54][55][56] PES calculated from MM, FMM, and SPM all agree as reported in Fig. 4. Numerical calculations were performed until convergence was achieved, leading to a grid with spacing ∆R = 0.4 a.u. and box sizes depending on the method. For MM we employed a simulation box of R A = 70 a.u. and set the buffer region at R C = 30 a.u. In order to have energy resolution comparable with PM we used padding factors (see Appendix B) P = P N = 4 and the total simulation time was T = 18 optical cycles. For FMM a smaller box of R A = 40 a.u. with R C = 30 a.u. is needed to converge results, and P = 8, P N = 2 were needed to preserve numerical stability for T = 18 optical cycles. For SPM two sampling points at r S = −500, 500 a.u. were needed to get converged results with a box of 550 a.u., and a complex absorber 57,58 at 49 a.u. from the boundaries of the box. In addition, a total time of T = 74 optical cycles was required to collect all the wave packets. The need for such a huge box resides on the working conditions of SPM. In order to avoid spurious effects, the sampling points must be set at a distance such that the density front arrives after the external field has been switched off. Therefore the longer the pulse the further away the sampling points must be set. For these laser parameters one could rank each method according to increasing numerical cost starting from MM, followed by FMM, PM, and SPM. As a second example we study the ionization of this one dimensional hydrogen atom by an ultra-short intense infrared laser. We employ a single two-cycle pulse of wavelength λ = 800 nm (ω = 0.057 a.u.), intensity I = 2.5 × 10 14 W/cm 2 , and envelope with N c = 2 and A 0 = 225.8 a.u. Due to the laser strength and long wavelength, the electron evolution shown the one presented before. Electrons ejected from the core are driven by the laser and follow wide trajectories before returning to the parent ion. Such trajectories can be understood in the context of the semiclassical model 11 where released electrons move as a free particle in a timedependent field with a maximum oscillation amplitude of x 0 = 2A 0 /ωc = 57.8 a.u. Electrons ejected near a maximum of the electric field (t) = −∂A(t)/∂t are the ones gaining the most kinetic energy and are therefore responsible for the fast emerging electrons after rescattering with the core. In Fig. 6 we show the energy-resolved PES for different methods. Here the spectra appear to be very far from any ATI structure due to short duration of the laser pulse and is characterized by some irregular maxima and minima. 23 The characteristic features of the ionization dynamics is strongly dependent on the detailed shape of the pulse as one can easily imagine by inspecting the asymmetry in the electron ejection from Fig. 5. Due to these dynamics, a dramatic carrier envelope phase dependence for such short pulses is expected. All the different methods result in similar spectra but with different parameters. In PM we set R A = 50 a.u. and a box of radius R = 700 a.u. is needed to contain the wave function after T = 4 optical cycles (one optical cycle = 2.66 fs). For MM R A = 200 a.u., R C = 40 a.u., and the padding factors are P = 2, P N = 4. Here the value for R A is dictated by the width of the buffer region which needs to be wide enough to prevent spurious reflections. 
A considerably smaller box is needed for FMM, where R_A = 60 a.u., R_C = 40 a.u., P = 4, and P_N = 2. In this case one can reconstruct the total density in A by evaluating |ψ_A(x, t)|² via Eq. (15) and compare it to the exact evolution. As one can see in Fig. 7, the reconstructed density displays a behavior remarkably similar to the exact one of Fig. 5, but at a considerably reduced computational cost. SPM requires sampling points at r_S = −130, 130 a.u. in a box of radius R = 200 a.u. with 49 a.u. wide complex absorbers, and a total time of T = 7 optical cycles. The possibility to use relatively small simulation boxes is especially important for three-dimensional calculations, where the computational cost scales with the third power of the box size. Both mask methods are practicable options for 3D simulations, and the advantage of using FMM with respect to MM is driven by the electron dynamics. For long laser pulses MM appears to be more stable and is a better choice than FMM, while for short pulses with large electron oscillations FMM can perform better. SPM is a viable option for short pulses and small oscillation amplitudes.

As a last example, we present ATI of a real three-dimensional hydrogen atom subject to a long infrared pulse. We employ a laser linearly polarized along the x-axis with wavelength λ = 800 nm, intensity I = 5 × 10¹³ W/cm², and a pulse shape of the form (21) with N_c = 20 and A_0 = 91.3 a.u. Due to the pulse length, MM appears to be the most appropriate choice in this case. In Fig. 8 we show a high-resolution density plot of the PAD P(E, θ) defined in Eq. (6). The radial distance denotes the photoelectron energy, while the angle indicates the direction of emission with respect to the laser polarization. The color density is plotted on a logarithmic scale and represents the values of P(E, θ). The photoelectron energy-angular distribution displays complex interference patterns. The pattern shape compares favorably with similar calculations in the literature. 24,25,44,59 It consists of a series of rings with fine structures. Each ring represents the angular distribution of a photoelectron ATI peak. The spacing of adjacent rings equals the photon energy ω = 0.057 a.u. Photoelectrons are emitted mainly along the laser polarization, and the left-right symmetry of the rings indicates that the photoelectrons do not have any preferential ejection side with respect to the polarization axis. The first ring corresponds to the angular distribution of the first ATI peak. It presents a peculiar nodal pattern that is induced by the long-range Coulomb potential and is related to the fact that the ATI peak is determined by one dominant partial wave in the final state. 60 The number of stripes equals the angular momentum quantum number of the dominant partial wave in the final state plus one. 60 In Fig. 8, the first ring contains six stripes, and the dominant final state has an angular momentum quantum number of 5. The pattern of the energy-angular distribution and the stripe number of the first ring are in good agreement with those in the literature. 24,44 As for the fine structures, we observe that while the main ring pattern is already formed in the first half of the pulse, the fine structure builds up until the end of the pulse. This supports the hypothesis that such structures are induced by the coherence of the two contributions from the leading and trailing edges of the pulse envelope. 44
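The stripe-counting rule can be verified with a few lines of code (our sketch): for a dominant m = 0 partial wave of angular momentum ℓ, the ring intensity varies as |P_ℓ(cos θ)|², which has ℓ nodes in (0°, 180°) and hence ℓ + 1 stripes.

```python
# Counting the angular nodes of a dominant l = 5 partial wave (m = 0).
import numpy as np
from numpy.polynomial import legendre

l = 5
theta = np.linspace(0.0, np.pi, 2001)
coeffs = np.zeros(l + 1)
coeffs[l] = 1.0                              # select the Legendre polynomial P_l
P = legendre.legval(np.cos(theta), coeffs)
# Count sign changes of P_l(cos theta) on a dense angular grid:
nodes = np.count_nonzero(np.diff(np.signbit(P).astype(int)))
print(f"l = {l}: {nodes} nodes -> {nodes + 1} stripes")   # 5 nodes -> 6 stripes
```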
B. N2 under a few-cycle infrared laser pulse

In this section we compare theoretical and experimental angle-resolved photoelectron probabilities for randomly oriented N2 molecules. We choose the laser parameters according to experiment, 28 i.e., we employ an N_c = 6 cycle pulse of wavelength λ = 750 nm (ω = 0.06 a.u.) and intensity I = 4.3 × 10¹³ W/cm². The vector potential is chosen such that the resulting electric field is similar to the one employed in the experiment, with zero carrier-envelope phase. In Fig. 9 (a) the experimental photoelectron probability P̄(E, θ) is plotted on a logarithmic scale as a function of the energy and the angle with respect to the laser polarization in the laboratory frame. Electrons are mainly emitted at small angles and, due to the short duration of the pulse, electron emission is asymmetric along the laser polarization axis (at angles close to 0° and 180°). We performed TDDFT calculations for different angles θ_L between the molecular axis and the laser polarization. The molecular geometry was set at the experimental equilibrium interatomic distance R_0 = 2.074 a.u. The Kohn-Sham wavefunctions were expanded in real space with spacing ∆r = 0.38 a.u. in a simulation box of R_A = 35 a.u. The photoelectron spectra were calculated with FMM having R_C = 25 a.u. and padding factors P = 1 and P_N = 4. In Fig. 10 the logarithmic ionization probability P_θL(E, θ) is plotted as a function of energy E and angle θ measured from the laser polarization axis for different values of θ_L. As the molecular orientation angle decreases from θ_L = 90° to θ_L = 30° we observe an increasing suppression of the emission, together with a shift of the maximum away from the laser polarization axis. For θ_L = 0° the emission is highly enhanced at all angles and peaked along the laser direction. The signature of multi-center emission interference has been predicted to be particularly marked when the laser polarization is perpendicular to the molecular axis 61,62 (i.e. θ_L = 90°). However, the lowest point in energy of such a pattern is predicted at θ = 90° and E = π²/(2R_0²) ≈ 31 eV, well above the energy window of observable photoelectrons produced by our laser. A stronger and longer laser pulse would be required to extend the rescattering plateau toward higher energies and thus reveal the pattern. 63 In order to reproduce the experimental P̄(E, θ), an average over all the possible molecular orientations should be performed. Owing to the axial symmetry of the molecule we can restrict the average to 0° ≤ θ_L ≤ 90° and integrate all the contributions with the proper probability weight [Eq. (23)]. 41 We evaluate Eq. (23) by discretizing the integral into a sum over θ_L = 0°, 30°, 60°, 90°, and display the result in Fig. 9 (b). Even in this crude approximation, and without taking focal averaging into account, the agreement with the experiment is satisfactory and compares favorably to the molecular strong-field approximation shown in Fig. 9 (c). The agreement deteriorates at low energies, where the importance of the Coulomb tail is enhanced and is not fully accounted for owing to the limited dimensions of the simulation box; indeed, the agreement improves considerably at higher energies.
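Eq. (23) itself is not reproduced in the extracted text; for a randomly oriented linear molecule the orientation average carries a sin θ_L solid-angle weight, which is what the sketch below assumes (our own illustration; the file names, array layout and normalization convention are hypothetical):

```python
# Discretized orientation average of the TDDFT maps P_{theta_L}(E, theta).
import numpy as np

angles_deg = [0, 30, 60, 90]
theta_L = np.deg2rad(angles_deg)
# Hypothetical files holding one P_{theta_L}(E, theta) map per orientation:
P_maps = np.stack([np.load(f"P_thetaL_{a}.npy") for a in angles_deg])

w = np.sin(theta_L)                                 # random-orientation weight
P_avg = (w[:, None, None] * P_maps).sum(axis=0) / w.sum()   # weighted average
```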
C. He-(I) PADs for carbon monoxide and benzene

In this section we deal with UV (ω = 0.78 a.u.) angle-resolved photoemission triggered by weak lasers. When the external field is weak, non-linear effects can be discarded and first-order perturbation theory can be applied. In this situation, the momentum-resolved PES can be evaluated by Fermi's golden rule, of the form P(p) ∝ |⟨Ψ_f|A_0 · p̂|Ψ_i⟩|² δ(E_f − E_i − ω) [Eq. (24)], where |Ψ_i⟩ (|Ψ_f⟩) is the initial (final) many-body wavefunction of the system and A_0 is the laser polarization axis. The difficulty in evaluating Eq. (24) lies in the proper treatment of the final state, which in principle belongs to the continuum of the same Hamiltonian as |Ψ_i⟩. In the simplest approach, it is approximated by a plane wave (PW). In this approximation the square root of the momentum-resolved PES is proportional to the sum of the Fourier transforms of the initial-state wavefunctions Ψ̃_i(p), corrected by a geometrical factor |A_0 · p| [Eq. (25)]. If photoemission peaks are well resolved in momentum, individual initial states can be selectively measured. In this case a correspondence between the momentum-resolved PES and the electronic states in reciprocal space can be established. The range of applicability of the PW approximation has been discussed in the literature. 5 Here we restrict ourselves to photoemission from the highest occupied molecular orbital (HOMO). In this case Eq. (25) becomes P(p) ∝ |A_0 · p|² |Ψ̃_H(p)|² [Eq. (26)], the subscript H indicating HOMO-related quantities. We compare ab-initio TDDFT and PW PADs evaluated at fixed momentum |p_H| = √(2E_H), with E_H = ω − E_B being the kinetic energy of photoelectrons emitted from the HOMO and E_B its binding energy. The TDDFT numerical calculations are carried out on a grid with spacing ∆r = 0.28 a.u. for benzene and ∆r = 0.38 a.u. for CO, in a simulation box of R_A = 30 a.u. Photoelectron spectra are calculated using MM with R_C = 20 a.u. and padding factors P = 1, P_N = 8. A 40-cycle pulse with an 8-cycle ramp at the He-(I) frequency ω = 0.78 a.u. and intensity I = 1 × 10⁸ W/cm² is employed.

We begin by presenting the case of benzene, since it constitutes the smallest molecule meeting all the conditions for Eq. (26) to be valid. Results for molecules oriented according to Fig. 11 (a), evaluated at E_H = 0.363 a.u. and for two different laser polarizations A_0 = â_1, â_2, with â_1 = (1, 0, 0) and â_2 = (1, 1, 1)/√3, are shown in Fig. 12. In the case where the laser is polarized along the x axis [see Fig. 12 (b)], the PAD presents a four-lobe symmetry separated by three horizontal and two vertical nodal lines. This structure is reminiscent of the HOMO π-symmetry, with the nodal line at θ = 90° corresponding to the nodes of the orbital on the x-y plane. Information on the orientation of the molecular plane can then be inferred from inspection of this nodal line in the PAD. A similar feature can be observed in the case of an off-plane polarization, as shown in Fig. 12 (d). In this case, however, the laser can also excite σ-orbitals, and the nodal line at θ = 90° is partially washed out. The other nodal lines can be understood in terms of the zeros of the polarization factor |A_0 · p| and are thus purely geometrical. (In Fig. 12, white tics indicate the intersection of the laser polarization axis with the sphere at constant kinetic energy E_H; the geometry of the photoemission process is indicated in Fig. 11 (a).) A PW approximation of the photoelectron distribution given by Eq. (26) qualitatively reproduces the ab-initio results, as shown in Fig. 12 (a) and (d). According to condition (iii), quantitative agreement is reached only for directions parallel to the polarization axis.
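To make Eq. (26) concrete, here is a small sketch of ours evaluating a PW-type PAD on the sphere |p| = √(2E_H). The Gaussian p_z-like "HOMO" transform is a placeholder of our own; in practice Ψ̃_H would come from the Fourier transform of the Kohn-Sham HOMO:

```python
# Plane-wave PAD of Eq. (26): P(p) ∝ |A0 · p|^2 |Psi_H(p)|^2 at fixed |p|.
import numpy as np

E_H = 0.363                                   # benzene HOMO kinetic energy (a.u.)
p = np.sqrt(2.0 * E_H)
theta, phi = np.meshgrid(np.linspace(0, np.pi, 181),
                         np.linspace(0, 2 * np.pi, 361), indexing="ij")
px = p * np.sin(theta) * np.cos(phi)
py = p * np.sin(theta) * np.sin(phi)
pz = p * np.cos(theta)

A0 = np.array([1.0, 0.0, 0.0])                # polarization along x (a1 direction)
psi_H = pz * np.exp(-(px**2 + py**2 + pz**2)) # placeholder pi-like orbital FT
pad = np.abs(A0[0] * px + A0[1] * py + A0[2] * pz)**2 * np.abs(psi_H)**2
```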
A different behavior is expected in the case of CO. Photoelectrons with a kinetic energy of E_H = 0.261 a.u. are shown in Fig. 13. In this case condition (i) (i.e. a π-conjugated molecule) is not fulfilled, and a worse agreement between ab-initio and PW calculations is expected. The quality of the agreement can be assessed by comparing the left and right columns of Fig. 13. Here, the weak angular variation of |Ψ̃_H(p)| is completely masked by the polarization factor |A_0 · p| [cf. Fig. 13 (a) and (c)]. For this reason no information on the molecular configuration can be recovered from a PW model. The situation is qualitatively different for TDDFT since, in this case, single-atom electron emitters are fully accounted for. Here the nodal pattern is mainly governed by the polarization factor; however, fingerprints of the molecular electronic configuration can be detected. For instance, when the laser is polarized along the molecular axis, an asymmetry of the photoemission maxima can be observed for directions parallel to â_1 [see Fig. 13 (b); in Fig. 13 the molecule is oriented according to Fig. 11 (b) and the photoelectron spectra were evaluated on a sphere at E_H = 0.261 a.u.]. Here the global maximum is peaked around (φ, θ) = (180°, 90°), corresponding to the side of the carbon atom on the molecular axis [cf. Fig. 11 (b)]. These features can again be understood in terms of the shape of the HOMO. For CO, in fact, the HOMO is a σ orbital with the electronic charge unevenly accumulated around the carbon atom. It is therefore natural to expect photoelectrons to be ejected mainly around the molecular axis and with higher probability from the side of the carbon atom. This asymmetry is therefore a property of the electronic configuration of the molecule and gives information about the molecular orientation itself. This behavior appears to be stable upon molecular rotation, as can be observed in the case where the polarization is tilted with respect to the molecular axis [A_0 = â_1, see Fig. 13 (d)]. Even here the nodal structure is mainly dictated by the polarization factor.

IV. CONCLUSIONS

In this work we studied the problem of photoemission in finite systems with TDDFT. We presented a formal derivation of a photoelectron density functional from a phase-space approach to photoemission. Such a functional can be directly applied to other theories based on a single Slater determinant, and the derivation could serve as a basis for extensions to more refined models. We proposed a mixed real- and momentum-space evolution scheme based on geometrical splitting. In its complete form it allows particles to seamlessly pass back and forth between a real-space and a momentum-space description. The ordinary splitting scheme turns out to be a special case of this more general method. Furthermore, we illustrated applications of the method on four physical systems: hydrogen, molecular nitrogen, carbon monoxide and benzene. For hydrogen we presented a comparison of the different methods. We studied ATI peak formation in a one-dimensional model and ATI angular distributions for a three-dimensional case. The results turned out to be in good agreement with the literature. From the comparison, we derived a prescription to choose the best method based on a classification of the electron dynamics induced by the external field. We investigated angle-resolved photoemission for randomly oriented N2 molecules in a short intense IR laser pulse. We illustrated the results for four different molecular orientations with respect to the laser polarization.
Owing to the symmetry of the problem, we were able to combine the results to account for the random orientation. The spectrum for randomly oriented molecules is in good agreement with the experimental measurements and performs much better than the widely used strong-field approximation (with one active electron). 28 We also studied UV angle-resolved photoelectron spectra for oriented carbon monoxide and benzene molecules. We presented numerical calculations for two different directions of the laser polarization and compared them with the plane-wave approximation. We found that the plane-wave approximation provides a good description for benzene while failing for CO. Furthermore, we found evidence that the photoelectron angular distribution carries important information on the molecular orientation. The successful implementation of the photoelectron density functional presented in this Article paves the way for interesting applications to many different systems over a wide range of laser parameters. To name a few, TDDFT PADs could provide a theoretical tool superior to the plane-wave and independent-atomic-center approximations for retrieving molecular adsorption orientations from experiments. Attosecond pump-probe experiments could be simulated ab initio, accounting for many-body effects with a great computational advantage with respect to full many-body methods and a better physical description than SAE pictures.

V. ACKNOWLEDGMENTS

Special thanks to Lorenzo Stella for many stimulating discussions and suggestions. We also wish to acknowledge useful discussions and comments from Stefan Kurth, Ilya Tokatly, Matteo Gatti and Franck Lépine.

Appendix A: Overlap integrals

In this section we describe the details of the inclusion of the Kohn-Sham one-body density matrix (9) into Eq. (4). The momentum-resolved photoelectron probability is the sum, over all the occupied orbitals, of four overlap integrals γ_{α,β} with α, β ∈ {A, B} [Eq. (A1)]. In order to simplify the notation we drop the orbital index i in the overlap integrals and indicate with v = v v̂ the vector v of modulus v and direction v̂. In addition, we consider the simple case where the boundary surface between regions A and B is a d-dimensional sphere of radius R_A. We start by considering the mixed overlap γ_{A,B}(p) [cf. Fig. 1 (a)]. It is convenient to work in the coordinates v = 2R and r = R + s/2, in which the integral takes a simpler form. We substitute ψ*_B with its Fourier integral representation and, after a few simple steps, we obtain Eq. (A5), where we have disentangled the integration over v in the second integral. The integral over v > 2R_A can be rewritten as an integral over the whole space, which yields a d-dimensional Dirac delta, minus an integral over v ≤ 2R_A [Eq. (A6)], where J_n(k) denotes a Bessel function of the first kind. The second term in (A6) is a function centered at −p and strongly peaked in a region of width w = C_d/R_A, with C_1 = π, C_2 ≈ 3.83 and C_3 ≈ 4.49 being the first zeros of the Bessel functions J_{d/2}(k). If the region w is small enough, we can consider the k-integrand of (A5) constant and factor ψ̃_A(−2p − k)ψ̃*_B(k), evaluated at k = −p, out of the integral. It is then easy to see that, by plugging (A6) into (A5), γ_{A,B}(p) ≈ 0. By the same reasoning we should expect γ_{B,A}(p) ≈ 0. We now turn to the terms containing wavefunctions on the same region. In (v, r) coordinates the first of them takes the form of Eq. (A8). The product of functions localized in A is non-negligible only for r < R_A and |v − r| < R_A. Since the integral is carried out for v > 2R_A, we can bound |v − r| from below: |v − r| ≥ v − r > 2R_A − R_A = R_A.
This leads to R_A ≤ |v − r| < R_A, which is satisfied only on the boundary of A. This being a set of negligible measure, we have γ_{A,A}(p) = 0. Once again in (v, r) coordinates, the remaining term can be written as a double integral [Eq. (A10)], where the first integration is over region A. Using the localization of ψ_B, we see that the integrand is non-zero only for r > R_A and |v − r| > R_A. As the integration is for v < 2R_A, we have R_A ≥ |v − r| > R_A, and therefore the double integral in Eq. (A10) is zero.

Appendix B: Numerical stability and Fourier integrals

A real-space implementation of Eq. (15) involves the evaluation of several Fourier integrals. Such integrals are necessarily replaced by their discrete equivalents, and therefore discrete Fourier transforms (FTs) and fast Fourier transforms (FFTs) come into play. However, evolution methods based on the discrete FT naturally impose periodic boundary conditions. While this does not present any particular issue for MM, where FTs are only used to map real-space wavefunctions to momentum space, it is a source of numerical instability for FMM, where the wavefunctions are reintroduced into the simulation box. The problem is well illustrated by the following one-dimensional example. Imagine a wavepacket freely propagating toward an edge of the simulation box with a certain velocity. In MM, when passing through the buffer region, the packet is converted by a discrete FT into momentum space and then analytically evolved as a free particle through the edge of the box. In FMM, as the wavefunction evolves in momentum space it is also transformed back to real space to account for possible charge returns. In this case, instead of just disappearing from one edge, by virtue of the periodic boundary conditions of the discrete FT, the same wavepacket will reappear from the opposite side. It can easily be understood how such an undesirable event can create a feedback loop leading to an uncontrolled and unphysical build-up of the density. This behavior can be controlled by the use of zero padding. The Fourier integrals in Eq. (15) involve functions that are, by construction, zero outside the buffer region C. We can therefore enlarge the integration domain (of radius R_A) by a padding factor P and set the integrand to zero at the added points, obtaining the same result. As a consequence, a wavepacket propagating toward a boundary edge will have to traverse an enlarged virtual box of radius R̃_A = R_A(2P − 1) before emerging from the other side. In addition, the smallest momentum represented in the discretized ψ̃_{B,i}(p, t), ∆p̃ = ∆p/P, is reduced by a factor 1/P, while the highest momentum p_max = π/∆r remains unchanged. The price to pay is a memory requirement increased by a factor P^d (where d is the dimension of the simulation box), which is too high for three-dimensional calculations. A better scaling is offered by the use of the non-uniform discrete Fourier transform and its companion fast algorithm, the NFFT. 64,65 The NFFT makes it possible to perform Fourier integrals on unstructured sampling points with, for fixed accuracy, the same arithmetical complexity as the FFT. For a detailed description of the algorithm we refer to the literature. 65 The idea is to use the flexibility of the NFFT to perform zero padding in a convenient way. Instead of allocating an enlarged box filled with zeros at equally spaced sample positions, we set only one point at R_A P_N (here P_N is the NFFT padding factor) and evaluate the Fourier integral with the NFFT.
In this way we gain numerical stability for FMM, as long as the wavefunctions are contained in a virtual box of radius R̃_A = R_A(2P_N − 1), at the price of adding a number of points that scales with the (d − 1)-th power of the grid size: if N^d is the number of grid points in the simulation box, in order to perform zero padding with the NFFT one needs to add only 2N^(d−1) points. With this procedure, however, not only is the smallest momentum ∆p reduced by a factor 1/P_N, but the highest momentum p̃_max = (N/2 + 1)∆p̃ is decreased by the same amount. This turns out to be the limiting factor in the use of the NFFT to preserve numerical stability with FMM, as the enlargement factor P_N has an upper bound that depends on the escaping-electron dynamics. In fact, when we evaluate the back-action term of Eq. (16b), we assume ψ̃_{B,i}(p, t) to be localized in momentum and, in order to preserve numerical consistency, P_N must be limited by the highest momentum contained in ψ̃_{B,i}. A combination of ordinary padding and NFFT padding helps to balance the tradeoff between memory occupancy and numerical stability. Finally, in MM zero padding can be used to increase the resolution in momentum.
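A minimal numerical illustration of the padding argument (our own sketch): padding the FFT of a toy wavepacket by a factor P refines the momentum grid, ∆p → ∆p/P, while p_max = π/∆r is untouched.

```python
# Zero padding and momentum resolution for a 1D toy wavepacket.
import numpy as np

dr, N, P = 0.4, 256, 4
x = (np.arange(N) - N // 2) * dr
psi = np.exp(-x**2) * np.exp(1j * 1.5 * x)    # Gaussian packet, momentum 1.5 a.u.

for pad in (1, P):
    psi_p = np.fft.fft(psi, n=pad * N)        # n > len(psi) zero-pads the input
    dp = 2 * np.pi / (pad * N * dr)           # momentum-grid spacing
    print(f"padding {pad}: dp = {dp:.4f} a.u., p_max = {np.pi / dr:.3f} a.u.")
```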
Effect of the Organic Production and the Harvesting Method on the Chemical Quality and the Volatile Compounds of Virgin Olive Oil over the Harvesting Season

Organic production has increasing importance in the food industry. However, its effect on olive oil characteristics remains unclear. The purpose of this study was to investigate the effect of organic production without irrigation, of the traditional harvesting methods (tree- vs. ground-picked fruits), and of the harvesting time (over a six-week period) on the oil characteristics. Free acidity, peroxide value, K232, K270, ΔK, total phenols, oxidative stability and the volatile compound profile (by SPME extraction, gas chromatography and mass detection) of olive oils from the Verdial de Badajoz cultivar were analysed. The organic production affected the peroxide value, the total phenols, the oxidative stability and 34 out of 145 volatile compounds. Its effect was much less strong than that of the harvesting method, which severely affected all the chemical and physical-chemical parameters and 105 out of 145 volatile compounds. Conversely, the harvesting time was revealed as a factor with little repercussion on the chemical and physical-chemical parameters (only the peroxide value was influenced), although it affected 83 out of 145 volatile compounds. The larger content of total phenols in the organic oils than in the conventional ones could explain the increase in oil stability and the differences in the volatile compounds.

Introduction

Virgin olive oil (VOO) is a valuable product obtained mechanically, without any refining process, so it keeps olive fruit compounds such as antioxidants [1] as well as the compounds responsible for its typical colour and flavour. Olive oil flavour depends on the content of bitter-tasting compounds, such as the phenolic compounds, but also on the volatile compounds, which are responsible for the typical odour notes and potential defects. The most important VOO volatile compounds are formed through the lipoxygenase (LOX) pathway [2], C5 and C6 LOX compounds being the major contributors to the essential green sensory attribute [3]. When olives are released from the trees (either by falling down spontaneously or by being harvested), progressive cell disruption takes place, which triggers the LOX pathway [3] and, therefore, the generation of C5 and C6 compounds and the development of the typical olive oil flavour. Factors affecting the activity of the enzymes involved in the LOX pathway, such as the fruit cultivar, can therefore modulate the final oil flavour.

When mature (maturity index in the orchard ranging between 1, fruits with green-yellowish skin, and 5, fruits with black skin and <50% purple flesh), the fruits from the organic orchards were collected from the trees (Organic), whereas the ones from the conventional orchards were collected either from the trees (Conventional) or from the ground (Ground). They were mechanically processed into oil, separately (on different days, after proper cleaning to avoid cross-contamination), under the same conditions in a local factory over the harvesting season. The organic production was subjected to the official control established in Spain according to Regulation 834/2007. Oil samples were taken from a tank filled during a week (20,000 L) for the Organic and Conventional oils, or directly from the production line for the Ground oil (which was produced once per week), once a week from the beginning of November to mid-January. Then, the eighteen (three types of oil × six weeks) virgin olive oils were kept at 6 °C and analysed.
Chemical and Physical-Chemical Analyses of Oil

The so-called quality parameters (free acidity, peroxide value, and the K232, K270 and ΔK extinction coefficients), which are taken into account to establish the olive oil categories within the European Union according to Commission Regulation 2568/91 [16] and subsequent amendments, were determined [16]. The total polar phenol content was determined using the Folin-Ciocalteu colorimetric method [17]. The results were expressed as caffeic acid equivalents in mg kg⁻¹ oil. The oxidative stability index (induction time, expressed in hours) was determined using an eight-channel 743 Rancimat instrument (Metrohm, Herisau, Switzerland), heating the oil samples (2.5 g) at 100 °C under an air flow of 10 L h⁻¹ [18].

Volatile Compound Analysis

The virgin olive oil samples (5 g) were introduced into glass screw-top vials with laminated Teflon-rubber disks in the caps. The vials were left in a water bath at 40 °C for 10 min to equilibrate the volatile compounds in the headspace. Then, a solid-phase microextraction (SPME) needle was inserted through the disk, and a 1 cm, 50/30 µm thickness DVB/Carboxen/PDMS fibre (Supelco, Bellefonte, PA, USA) was exposed to the headspace for 40 min while the vial was kept in the 40 °C water bath. Later, the fibre was transferred to the gas chromatograph inlet (splitless mode, 250 °C). The chromatographic separation of the compounds was carried out using an HB-5 (50 m × 0.32 mm i.d., 1.05 µm) column (Agilent, Avondale, AZ, USA) placed in a gas chromatograph (Agilent 6890 series) equipped with a mass spectrometric detector (Agilent 5973). The oven temperature was held at 40 °C for 10 min, raised at 3 °C min⁻¹ to 160 °C, and then at 15 °C min⁻¹ to a final temperature of 220 °C, where it was held for 10 min (total run time: 64 min). Mass spectra were generated by electron impact at 70 eV, with a multiplier voltage of 1756 V. Data were collected at a rate of 1 scan s⁻¹ over the 30-300 m/z range. The transfer line to the mass spectrometer was maintained at 280 °C. The Agilent MSD Chemstation software was used. n-Alkanes (C5-C18) were analysed under the same conditions to calculate the linear retention indices (LRI). Identification was performed by matching mass spectra (MS) and LRI with those of reference compounds analysed under the same conditions (a total of 62 Sigma-Aldrich reference compounds were used), or with those included in the Flavornet (www.flavornet.org) or NIST [19] databases. Two samples of each oil batch were analysed, and results were expressed as total area counts.

Data Analyses

A three-way (organic production, harvesting method, and harvesting time) Analysis of Variance (ANOVA) was performed on the data. When a significant effect was found, the Tukey test was carried out to compare the means. A Principal Component Analysis was performed on the mean values for each sample to evaluate the relations among variables and samples [20]. The statistical analyses were performed by means of SPSS version 22.0 (a minimal sketch of this workflow with open-source tools is given below).

Table 1 shows the results from the three-way ANOVA performed on the data from the chemical and physical-chemical analyses carried out on the Verdial de Badajoz virgin olive oils. The effect of the organic production was moderate (three out of seven parameters were affected), the harvesting method greatly influenced the parameters (all of them were affected), and the effect of the harvesting time was weak (only one out of seven parameters was affected) (Table 1).
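The sketch below (ours, not the original SPSS script) reproduces the statistical workflow with open-source tools; the file and column names are hypothetical. Interactions are omitted because the Organic oils exist only for tree-picked fruits, so production and method are partially confounded:

```python
# Three-way main-effects ANOVA and Tukey HSD on one response (e.g. peroxide value).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("verdial_oils.csv")  # hypothetical: PV, production, method, week, group
model = smf.ols("PV ~ C(production) + C(method) + C(week)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))        # significance of each factor

# Pairwise comparison of the three oil groups (Organic / Conventional / Ground)
print(pairwise_tukeyhsd(df["PV"], df["group"]))
```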
In the case of the type of production system (Organic vs. Conventional), the effect was significant on the peroxide value (PV), the total phenols and the oxidative stability of the oils (Table 1). The values averaged over the six-week period for the Organic and Conventional oils are shown in Figure 1. Compared with the Conventional group, the Organic one had lower values for PV (9.63 ± 1.93 vs. 11.26 ± 2.12 mEq O₂ kg⁻¹), and higher values for the total phenols (166.7 ± 15.0 vs. 149.0 ± 12.2 mg kg⁻¹) and the oxidative stability (26.3 ± 2.1 vs. 22.9 ± 2.0 h). However, most quality parameters (free acidity and the extinction coefficients) were not affected (p > 0.730 for all of them) (Table 1).
The lack of a marked effect on most of the quality parameters might be expected, as the agronomic practices were very similar (no irrigation in both cases, and little vs. no use of phytosanitary chemicals) except for fertilisation. Currently, there is no general agreement on the effect of the organic practices on virgin olive oil quality, apparently because of the difficulty of dealing with a relatively weak effect without excluding other environmental sources of variation. In this sense, a three-year study on the Leccino and Frantoio olive cultivars showed non-consistent differences in the oil quality parameters and phenol content due to the organic practices, suggesting that genotype and year-to-year variations in climate have a stronger effect [9]. However, both a decrease and an increase in oil quality have been reported in other studies. On the one hand, a decrease in oil quality (free acidity and K270) was reported and attributed to infestation and fungal infection in the organic fruits as a consequence of the absence of pesticides [21]. Likewise, lower phenol contents have been reported in organic Picual and Hojiblanca oils than in the conventional ones [13]. On the other hand, an increase in oil quality and/or in phenol content has also been reported as a result of the organic practices. A study on Picual oil showed lower values for PV, greater stability and a higher phenol content in the organic oil than in the conventional one [22], which is in line with our results for Verdial de Badajoz oil. More recent research on Leccino and Frantoio olive oils found no effect on the quality parameters but showed an increase in the phenol content, which was attributed to the decreased availability of soil nitrogen under organic practices, as phenol content increases with decreased soil nitrogen availability [23], suggesting that, under controlled environmental conditions, the effect of the agronomic practices on plant metabolism is clear [8]. Similarly, organic practices in Kolovi olive orchards resulted in an increase in phenols, including luteolin, which was proposed as a marker for organic production [24]. Our results seem to confirm those findings and show that the increase in phenols also occurs in Verdial de Badajoz oil from unirrigated orchards. The increase in the total phenols (which have antioxidant activity) could explain the increase in oil stability and the decrease in PV. In any case, the moderate effect of the organic production on these parameters might indicate that no large differences should be expected in the volatile compounds.

Effect of the Harvesting Method

The harvesting method affected all the parameters included in Table 1. Not only did the Ground oils (from ground-picked fruits) reach the highest, and therefore worst, values in all the quality parameters, but they also showed the lowest total phenol content and oxidative stability (Figure 1). All the Ground oils taken over the six-week period considerably exceeded the 0.8% limit for free acidity for extra virgin olive oil stated in European Commission Regulation 61/2011 [25], values being in the 2.4-5.3 range; moreover, all except the oil produced in week 2 exceeded the maximum allowance in at least one more quality parameter. Conversely, the two types of oils from tree-collected fruits (Conventional and Organic oils) were within the allowance limits for all the quality parameters.
These results match those reported for Picual samples [26], with all the oils from ground-picked fruits exceeding the limit for free acidity and all the ones from tree-picked olives being below it. With respect to the total phenol content, ground-picked olives, which are more exposed to microbiological infection than tree-picked olives, might have yielded lower phenol concentrations due to the phenol decline that microbiological activity causes, in particular in oleuropein derivatives [27]. Furthermore, since phenols protect oil from oxidation [28], a decrease in them would facilitate a decline in the oxidative stability of the oil. These results show a considerable repercussion of the harvesting method on the Verdial de Badajoz oil quality, regardless of the harvesting time, and confirm the fact that fruits collected from the ground result in poor-quality olive oils [10,26]. Thus, clear differences in the volatile compound profile between the Ground oils and the others might be expected.

Effect of the Harvesting Time

Only PV was affected by the harvesting time (Table 1). PV showed significant fluctuations throughout the six-week period instead of a steady trend (data not shown). Previous studies have reported that increased ripeness causes an undesirable increase in the values of the quality parameters, although with significant fluctuations, and a decrease in the oxidative stability and phenol content [12,29]. However, in our study harvesting was adjusted to real circumstances, where orchard management is set to harvest first the orchards that mature first, as is usually done in commercial mills to achieve the best overall results, which could explain the lack of a clear trend. Our results show that under real conditions the harvesting time itself is not a critical quality factor for the Verdial de Badajoz oil as long as harvesting management is adequately set. The slight effect of the harvesting time may anticipate slight changes in the volatile compound profile.

Volatile Compounds

A total of 145 volatile compounds were identified or tentatively identified in the headspace of the Verdial de Badajoz olive oils: 26 aldehydes, 13 ketones, 30 alcohols, 12 acids, 24 esters, 19 acyclic hydrocarbons, 13 cyclic hydrocarbons, four ethers and four other compounds (Table 2). The results from the three-way ANOVA show that few compounds were significantly affected by the organic production (34 out of 145), most of them (105 out of 145) were affected by the harvesting method, and over half of them (83 out of 145) by the harvesting time (Table 2). Differences appeared not only in the lipoxygenase (LOX)-derived compounds (Table 3) but also in the fermentation (Table 4) and oxidation (Table 5) ones, as well as in all the chemical families of compounds (Table 2).

Table 2. Significance levels from a three-way ANOVA performed on the volatile compound data * from the oil extracted from Verdial de Badajoz fruits farmed organically or conventionally (Prod.) and collected from the trees or from the ground (Meth.) over a six-week period (Week).

Regarding the type of production (Organic vs. Conventional), the modest effect found (Table 2) might be expected, since the oils were only moderately different in the chemical and physical-chemical parameters (Table 1). Previous work on the effect of organic practices on the volatile compounds of olive oil has reported no consistent differences in a three-year study [9], but also a general rise [13,21], which could indicate that there might be further agronomic factors influencing the results.
Conversely, in the case of the harvesting method, the noticeable effect on the volatile compounds might be expected, since the oils were markedly different in the chemical and physical-chemical parameters (Table 1). For most compounds, the largest abundances appeared in the Ground oils (Tables 3-5), which could be due to the mechanical damage caused by the drop of the fruits. The damage accelerates the decay process and facilitates the access of microorganisms to the fruits, whose infection results in changes in the volatile compound profile [27]. With regard to the harvesting time, the effect (Table 2) was stronger than expected taking into account the weak influence on the chemical and physical-chemical parameters (Table 1). In any case, a weak effect of the harvesting time was reported for Picual and Hojiblanca oils from irrigated orchards [13]. Our results for oil from unirrigated orchards suggest that the volatile compounds might be more affected by the harvesting time than the chemical analyses may reveal.

LOX-Derived Volatile Compounds

Table 3 shows the results for the most representative C5 and C6 LOX volatile compounds affected by the organic production, the harvesting method, and/or the harvesting time. Those compounds were among the most abundant ones in the Verdial de Badajoz olive oil headspace, as previously reported for the oil from this cultivar [30] and others [3,4,10]. Most of those compounds have low odour thresholds [3] and, therefore, could take part in oil flavour. In fact, C5 and C6 LOX compounds seem to contribute to the positive traits of olive oil [3,10]. The most abundant LOX compounds were (E)-hex-2-enal and (Z)-hex-3-en-1-ol, followed by hexan-1-ol and hexanal. It should be noted that hexanal, besides the LOX pathway, can be generated through oxidation reactions on linoleic acid [3], being involved in the rancid note of food when it appears at high concentrations. According to the ANOVA results (Table 2), the organic production affected eight out of the 13 LOX compounds included in Table 3, the harvesting method 12 out of 13 (all except hexanal), and the harvesting time 12 out of 13 (all except hexan-1-ol). Some LOX compounds, such as pentan-2-one and pentan-3-ol, were not significantly affected by any factor (Table 2).

Effect of the Organic Production (Organic vs. Conventional)

The effect of the organic production was significant on four out of the six C5 compounds included in Table 3, and on four out of the seven C6 ones, according to the ANOVA results (Table 2). The effect was stronger than expected taking into account the relatively slight influence on the quality parameters (Table 1). Values for the C5 LOX compounds were generally lower in the Organic oils than in the Conventional ones, although the Tukey test revealed only slight differences, especially in week 1. Different trends were found for important C6 compounds: (Z)-hex-3-en-1-ol tended to be more abundant in the Organic oils than in the Conventional ones (differences were significant in weeks 1, 4 and 6), whereas (Z)-hex-3-enal (weeks 1, 5 and 6), hexanal (weeks 3, 4, 5 and 6) and (E)-hex-2-enal (weeks 1, 3, 4, 5 and 6) showed the opposite trend. To date, the effect of the organic practices on the volatile compounds has been scarcely studied, and results are not completely consistent.
In this sense, our results from unirrigated Verdial de Badajoz orchards show a trend for hexanal similar to that of oils from the Leccino and Frantoio cultivars, also farmed in unirrigated orchards, although no clear trends were reported for the other compounds [9]. Conversely, higher abundances of hexanal in Organic than in Conventional oils from Picual and Hojiblanca olives from irrigated orchards have also been reported [13]. Therefore, our data might confirm that there is an effect of the organic production on some compounds, such as hexanal, but this effect might depend on other factors, such as irrigation or the olive cultivar.

Effect of the Harvesting Method

The significant effect of the harvesting method (Table 2) on all the C5 LOX compounds and on six (all except hexanal) out of the seven C6 ones included in Table 3 matches the substantial effect found on the chemical and physical-chemical parameters (Table 1). The C5 LOX compounds were generally more abundant in the Conventional oils (from tree-picked fruits) than in the Ground ones. However, a mixed trend was found for the C6 LOX compounds. (Z)-hex-3-enal, (E)-hex-2-enal and (Z)-hex-3-en-1-ol, which have been related to the green attribute [10], were significantly more abundant in the Conventional group than in the Ground one. Conversely, hexan-1-ol and hexyl acetate tended to be more abundant over time in the Ground oils, and (E)-hex-2-en-1-ol did not show a steady trend. Hexan-1-ol is considered to elicit a disagreeable odour in oil [10] and, therefore, its increase might have a detrimental effect on oil quality. Hexyl acetate, which contributes to the fruity note, is an indicator of ripeness [3], and its precursor (E)-hex-2-en-1-ol [3] has been related to some defects [10,31]. The differences between the Conventional and Ground groups (Table 3) in the LOX compounds increased over time, the Tukey test revealing that it was in week 6 when most C5 and C6 LOX compounds were influenced by the harvesting method (Table 3). (Z)-hex-3-enal and (E)-hex-2-enal were the compounds most affected, differences being significant in the Tukey test in all the sampling weeks (Table 3). It should be noted that hexanal was not affected by the harvesting method. This result does not match a previous study reporting an increase in hexanal in oil from ground-picked fruits [26]. However, hexanal content depends on the LOX pathway but also on oxidation reactions, and thus the lack of effect in our study (Table 2) may be explained by a counteracting effect of both pathways.

Effect of the Harvesting Time

According to the ANOVA results (Table 2), the effect of the harvesting time was significant on five out of the six C5 LOX compounds and on six out of the seven C6 ones included in Table 3. It affected all the oil groups to a similar extent (Table 3). The effect was stronger on these compounds than on the chemical and physical-chemical parameters (Table 1). Most C5 LOX compounds fluctuated over time without a consistent trend, although pentan-3-one and pent-1-en-3-ol decreased significantly as the season went on (Table 3). A similar pattern was reported for the C5 LOX compounds in Arbequina and Chétoui olive oils [4]. A general decrease was also found for the C6 compounds over time, especially for (Z)-hex-3-enal, (E)-hex-2-enal and (E)-hex-2-en-1-ol (Table 3), which are related to positive flavour traits [10].
These results for Verdial de Badajoz olive oil match previous results on other cultivars [4,29], although it has been pointed out that the decrease in C6 LOX compounds might not affect all cultivars [3]. The decrease over time was more marked in the Ground oils, which might indicate that harvesting late would add to the detrimental effect of harvesting from the ground.

Fermentation Compounds

According to the ANOVA results (Table 2), the most important compounds related to microbial activity were hardly affected by the organic practices (only ethanol was affected), but they were greatly influenced by the harvesting method (all the compounds included in Table 4 except 2-methylprop-2-enal were affected).

Effect of the Organic Production (Organic vs. Conventional)

Except for ethanol, neither the non-phenolic fermentation compounds (including short-chain acids and alcohols and branched C3 and C4 compounds) nor the volatile phenols were affected by the organic practices (Table 2). This result suggests that differences in the agronomic practices, such as fertilisation, do not affect to a considerable extent the degradation reactions in which microorganisms can be involved once the fruits are released from the trees, which is in line with the moderate effect on the chemical and physical-chemical parameters (Table 1). Scarce information is available about the effect of the organic practices on the fermentation compounds, since most attention has been devoted to the LOX compounds [9], and none is available for oils from unirrigated trees. For oil from irrigated orchards, a slight effect on the fermentation compounds was also reported for Hojiblanca (only 3-methylbut-2-en-1-ol was affected, without a consistent trend over time) and Picual oils (only methanol, 2-methylbutanal and 3-methylbut-2-en-1-ol were affected, with larger abundances in the organic oil) [13]. A larger content of 2-methylpropan-1-ol was reported in a group of organic oils than in conventional ones, but no differences were found in other fermentation compounds [21]. Our results show that the organic practices do not have a noticeable effect on the fermentation compounds of Verdial de Badajoz olive oil from unirrigated orchards, which is partly in line with previous studies on other cultivars and irrigated orchards.

Effect of the Harvesting Method

Regarding the harvesting method, both the non-phenolic and the phenolic fermentation compounds were markedly affected (12 out of 13, and all the phenols, respectively), the compounds being generally more abundant in the Ground oils than in the Conventional ones (Table 4). Almost all the non-phenolic compounds included in Table 4 (all except 2-methylprop-2-enal) were greatly affected by the harvesting method. Most of them were more abundant in the Ground oils than in the Conventional ones, although acetic acid followed the opposite trend. The generally higher values might be caused by the increased mechanical damage in the ground fruits and the subsequent opportunity for microbiological contamination and fermentation to occur. These results are in line with previous work reporting that fermentation compounds such as 2-methylbutan-1-ol and butan-1-ol were more abundant in oil from ground-picked olives than from tree-picked fruits [26]. Most of these compounds have low odour thresholds [3], and 3-methylbutan-1-ol and the short-chain acids have been related to the winey-vinegary and fusty defects [31].
With regard to the volatile phenols included in Table 4, they were all affected by the harvesting method (Table 2), all of them being more abundant in the Ground oils. For three compounds (2-phenylethanol, 2-ethylphenol, and 4-ethyl-2-methoxyphenol) the differences were significant in all six weekly samplings (Table 4). Our results for Verdial de Badajoz oil are in line with the increase in the volatile phenols reported in Picual oils [26]. The volatile phenols are markers of fruit degradation [32] and, in fact, it has been suggested that 4-ethylphenol is a microbial metabolite from hydroxycinnamic acids [26]. They are abundant in oils with strong fusty, musty and muddy defects [31,33]. In addition to their relatively low odour thresholds, phenols affect the release of some volatile compounds during consumption [34].

Effect of the Harvesting Time

The harvesting time had a significant effect (Table 2) on most of the non-phenolic (11 out of 13) and some of the phenolic (two out of five) fermentation compounds included in Table 4. Most of the non-phenolic compounds (all except 2-methylbutanal and 2-methylpropanoic acid) were affected by the harvesting time (Table 2). These compounds tended to fluctuate from week to week, although a general increase in ethanol, 2-methylpropanal and 3-methylbutanal was found. For most compounds the highest values tended to appear in weeks 3 and 4. The increase in ethanol over the harvesting season matches a rise in this compound found throughout fruit ripening [13]. This compound, which arises from fruit sugar fermentation and has been proposed as a marker of oil deterioration [13], has been related to the winey-vinegary defect [31]. Likewise, the branched aldehydes increased over time (Table 4). Although no increase was found in Hojiblanca and Picual oils [13], an increase in these undesirable compounds during fruit storage has been reported [4]. These compounds generally possess low odour thresholds [3] and, in fact, they and their corresponding alcohols and acids are related to the fusty defect [10]. With regard to the volatile phenols included in Table 4, only 2-phenylethanol and 2-ethylphenol were affected according to the ANOVA results (Table 2), with significant fluctuations over the six weeks consisting of an increase in week 3 and a subsequent decrease (Table 4). To our knowledge, no information is available about the effect of either the harvesting time or fruit ripening on the volatile phenols of olive oil. As mentioned above, the volatile phenols are related to oil degradation [4]. It has been suggested that their formation may depend on the resistance of olives to microbial decay, and they have been related to the free acidity values [35]. In our study there was no clear change in the quality parameters over time (Table 1), and the only parameter affected (PV) also showed fluctuations instead of a steady increase. Therefore, the results for the volatile phenols (Table 4), which fit those for the quality parameters, could confirm that there was no clear quality loss as the harvesting season elapsed.

Oxidation-Derived Volatile Compounds

According to the ANOVA results (Table 2), most of the oxidation compounds included in Table 5 were significantly affected by the organic production (five out of seven), all of them by the harvesting method, and two out of seven by the harvesting time. Some important oxidation compounds, such as (E)-dec-2-enal and the deca-2,4-dienal isomers, were not affected by any of the researched factors.

Effect of the Organic Production (Organic vs. Conventional)
Regarding the organic production, the differences found in all the compounds except heptanal and octanal were larger than expected considering the relatively modest effect found on the chemical and physical-chemical parameters (Table 1). (E)-hept-2-enal and nona-2,4-dienal, which possess low odour thresholds [3], tended to be more abundant in the Conventional than in the Organic oils over time. Nonetheless, there was no steady trend in the other compounds, without significant differences between the Organic and Conventional oils in most of the weeks (Table 5), which suggests that other factors might be involved. Previous studies on organic practices have not paid attention to their effect on the volatile oxidation compounds, apart from hexanal, which is also a well-known LOX compound.

Effect of the Harvesting Method

The effect of the harvesting method was significant on all the oxidation compounds included in Table 5, according to the ANOVA results (Table 2). Heptanal, (E)-hept-2-enal, (E)-oct-2-enal, octanal and nonanal were generally more abundant in the oils from ground-picked fruits than in the ones from tree-picked fruits in all the sampling weeks (Table 5). Conversely, hexa-2,4-dienal and nona-2,4-dienal were less abundant in the Ground oils. Most of those compounds possess low odour thresholds [3]. (E)-hept-2-enal and (E)-oct-2-enal are among the main contributors to the rancid flavour in oil, and octanal and nonanal are also involved in this sensory defect [31]. In fact, octanal, nonanal and (E)-hept-2-enal are indicators of oxidative degradation [4]. Although oxidation compounds typically arise from oxidation reactions during oil storage, they are also formed as a consequence of fruit microbial activity [10,27], which is favoured when the fruits are collected from the ground. In fact, a higher content of octanal in oils from ground-picked fruits than from tree-picked ones was reported [26], although no information is available about the other compounds. Our results for the oxidation volatile compounds are in line with the marked effect found on the chemical and physical-chemical parameters (Table 1), and confirm for Verdial de Badajoz oil from unirrigated orchards the general rise in oxidation compounds when fruits are ground-collected.

Effect of the Harvesting Time

Regarding the harvesting time, only two out of the seven oxidation compounds included in Table 5 were affected, according to the ANOVA results (Table 2). There were significant fluctuations and also a slight decrease in hexa-2,4-dienal and (E)-hept-2-enal over the harvesting time, regardless of the oil type (Table 5). However, most oxidation markers were not affected, harvesting over a six-week period having only a slight influence on the oxidative volatile compounds of the Verdial de Badajoz oils, which could indicate that the harvesting time was suitably scheduled according to orchard ripening. Previous studies on the effect of harvesting during different periods did not include the oxidation volatile compounds [13]. The slight effect is consistent with the results for the chemical and physical-chemical parameters, which were hardly affected (Table 1). Therefore, for unirrigated orchards, when timing is adequately set, only slight differences in the oxidation compounds are expected, the organic production and the harvesting method having a much more noticeable effect.
Principal Component Analysis (PCA)

A Principal Component Analysis (PCA) was performed on the variables included in Tables 1 and 3-5 to explore the relationships among them and the general effect of the organic practices, harvesting method and harvesting time. The results show a different distribution of the samples according to the organic practices and harvesting method. The Ground oils were plotted in the positive PC1 semiaxis (Figure 2a), where the quality parameters and most of the fermentation and oxidation compounds had large loadings (Figure 2b). Conversely, the oils from fruits collected from the trees (both the Conventional and Organic oils) appeared in the negative PC1 semiaxis, where the total phenols, oil stability and most LOX compounds reached large absolute loadings (Figure 2b). Figure 2b shows that most LOX compounds were positively related to the total phenol content and oil stability (most of them had negative loadings, generally under −0.5) and negatively related to the quality parameters and most of the fermentation and oxidation compounds (most of them had positive loadings, generally above 0.5). Therefore, factors which favour fermentation or oxidation (for example, collecting the fruits from the ground) seem to hinder the LOX-pathway compounds.

In addition, the Organic oils tended to reach negative scores in the PC2, whereas the Conventional ones tended to be displayed in the positive PC2 semiaxis. The variables with the largest absolute loadings in the PC2 were mostly LOX compounds, two oxidation compounds ((E)-hept-2-enal and hexa-2,4-dienal) and two fermentation compounds (2-methylprop-2-enal and 3-methylbutanal), all of them with positive values (Figure 2b). Therefore, these compounds seem to be more related to the Conventional than to the Organic oils according to the variability explained by the PC model.

Regarding the harvesting time, samples from the first weeks generally reached lower scores in the PC1 axis and higher in the PC2 axis, whereas the ones from the last weeks tended to follow the opposite trend. Therefore, at the beginning of the harvesting period, the oils tended to have higher scores in the variables positively related to oil quality, such as stability and phenol content, whereas at the end there was a slightly stronger relation to variables considered detrimental to oil quality (quality parameters, and fermentation and oxidation compounds).
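As an aside for readers reproducing the analysis, the PCA described above reduces, in code, to a standardization step followed by a two-component decomposition. The sketch below is illustrative only; the input file and column layout are hypothetical (a samples-by-variables table of the measurements in Tables 1 and 3-5).

# Minimal PCA sketch for a samples x variables table (hypothetical file).
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = pd.read_csv("oil_variables.csv", index_col="sample")

# Standardize so differently scaled variables contribute equally
Z = StandardScaler().fit_transform(X)

pca = PCA(n_components=2)
scores = pca.fit_transform(Z)  # sample scores, as plotted in Figure 2a
loadings = pd.DataFrame(
    pca.components_.T, index=X.columns, columns=["PC1", "PC2"]
)  # variable loadings, as plotted in Figure 2b
print(pca.explained_variance_ratio_)

Standardization matters here because the quality parameters and volatile concentrations are measured on very different scales; without it, the loadings would be dominated by the largest-valued variables.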
Conclusions

The results show that the organic practices in unirrigated orchards had a noticeable yet commercially modest effect on the chemical and physical-chemical parameters and the volatile compound profile. The effect was much less strong than that of the harvesting method, which severely affected the chemical and physical-chemical parameters, including the quality parameters (which are used in the official oil grading), and the volatile compounds. Conversely, the harvesting time under real conditions was revealed to be a factor with little repercussion on the oil quality parameters, which might be due to a suitable harvesting time schedule, although it had a noticeable effect on some important volatile compounds. Previous studies have not shown a consistent effect of the organic production on the olive oil characteristics, partly because of the difficulty of controlling other sources of variation. Our results, for oil obtained in a commercial mill, reveal an increase in the total phenols in the organic oil, which was previously reported at laboratory scale and attributed to the decreased availability of nitrogen due to the lack of chemical fertilisation.
This result, obtained under controlled conditions (e.g., the same mill, the same area, the same harvesting time and method), could explain the increase in oil stability and the changes in the volatile compound profile. Considering that some studies involving irrigated orchards have not shown an increase in phenols, it would be advisable to perform further studies under controlled conditions to shed light on how irrigation may modulate the effect of the organic practices, and on whether or not the organic fertilisation currently used in commercially exploited orchards causes a consistent decrease in soil nitrogen availability. That information would help understand the real effect of the organic production and how to deal with it.
Magnesium batteries: Current state of the art, issues and future perspectives

Summary

"...each metal has a certain power, which is different from metal to metal, of setting the electric fluid in motion..." Count Alessandro Volta. Inspired by the first rechargeable magnesium battery prototype at the dawn of the 21st century, several research groups have embarked on a quest to realize its full potential. Despite the technical accomplishments made thus far, challenges on the material level hamper the realization of a practical rechargeable magnesium battery. These are marked by the absence of practical cathodes, appropriate electrolytes and extremely sluggish reaction kinetics. Over the past few years, an increased interest in this technology has resulted in new promising materials and innovative approaches aiming to overcome the existing hurdles. Nonetheless, the current challenges call for further dedicated research efforts, encompassing both fundamental understanding of the core components and how they interact with each other, and the offering of new innovative solutions. In this review, we seek to highlight the most recent developments made and offer our perspectives on how to overcome some of the remaining challenges.

Introduction

Fueled by an ever-increasing demand for electrical energy to power the numerous aspects of modern human life, energy storage systems, or batteries, occupy a central role in driving the electrification of our societies [1]. The basic principles of a battery are rather old; its invention by Alessandro Volta dates back to the eighteenth century [2] (archeological findings in the 20th century even suggest that the first battery was developed in Mesopotamia around 2000 BC, in what is referred to as the "Baghdad battery" [3]). Since its invention, and most particularly in the twentieth century, advancements in energy storage technologies continued to evolve over time, resulting in a myriad of distinct batteries and energy storage chemistries [1]. Out of the several known battery technologies, secondary or rechargeable batteries, such as nickel metal hydride and lithium-ion, which allow for reversibly storing and harnessing power on demand while providing high power and energy conversion efficiencies, have played an invaluable role in driving the evolution of new technologies. Nowadays, their usage as an integral part of several modern applications on a variable size scale is apparent, encompassing miniature and portable devices, such as cell phones and laptops; medium-scale applications, such as hybrid (HV), plug-in hybrid (PHEV) and electric vehicles (EV); and large-scale stationary and grid applications [1,4]. As one of the scalable battery systems, lithium-ion batteries have been at the forefront in attracting great interest since the great discovery and ingenious use of Li-ion intercalation compounds as negative electrodes [1]. Although the capacities (a measure of the number of electrons obtained from the active material) offered by the most common lithium-ion intercalation compounds are lower than those provided by Li metal (i.e., 372 mAh g−1 and 837 mAh cm−3 for LiC6 vs 3862 mAh g−1 and 2061 mAh cm−3 for Li metal), their specific energy densities were proven to be more competitive than those of other rechargeable batteries, such as nickel (Ni)-metal hydride, Ni-cadmium (Cd) and lead (Pb)-acid (by about 2.5 times). They also provide higher specific power and long durability [1].
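The theoretical capacities quoted here, and throughout this review, follow directly from Faraday's law. As an illustrative check (our own arithmetic, not taken from the cited sources), the gravimetric capacity of an electrode material is:

% Theoretical gravimetric capacity from Faraday's law
% (n = electrons per formula unit, F = 96485 C mol^{-1}, M = molar mass):
C_{\mathrm{grav}} = \frac{nF}{3.6\,M}\ \mathrm{mAh\,g^{-1}}
% Graphite, counted per C_6 host unit (n = 1, M = 72.07 g mol^{-1}):
C_{\mathrm{LiC_6}} = \frac{96485}{3.6 \times 72.07} \approx 372\ \mathrm{mAh\,g^{-1}}
% Lithium metal (n = 1, M = 6.94 g mol^{-1}):
C_{\mathrm{Li}} = \frac{96485}{3.6 \times 6.94} \approx 3862\ \mathrm{mAh\,g^{-1}}
% Multiplying by the density gives the volumetric capacity,
% e.g., 3862 \times 0.534\ \mathrm{g\,cm^{-3}} \approx 2061\ \mathrm{mAh\,cm^{-3}} for lithium.

The same formula, with n = 2 and M = 24.31 g mol−1 for magnesium, reproduces the 2205 mAh g−1 and, via the density of 1.738 g cm−3, the 3832 mAh cm−3 quoted for Mg metal in the next paragraph.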
The fascinating advancements in Li-ion batteries have resulted in a state-of-the-art battery which uses graphitized carbon as the anode and a transition metal oxide as the cathode, coupled such that 240 Wh kg−1 and 640 Wh L−1 are provided for thousands of cycles [1]. The widespread use of the Li-ion battery has been and remains a testament to the numerous breakthroughs and technical advancements made thus far. One of the main challenges that current rechargeable battery technologies face is their inability to maintain energy and power densities sufficient to meet those demanded by their applications. In fact, the gap between the energy storage needs and what state-of-the-art systems are capable of providing is increasing. This ever-increasing gap has been a persistent force that drove many of the innovations made over the last 40 years [1]. For example, lithium batteries using lithium metal anodes have attracted attention as a candidate to fill the aforementioned gap. However, this system suffers from the intrinsic property of lithium to form needle-like lithium crystals, known as dendrites, when it is plated. These grow with subsequent plating/stripping cycles, resulting in internal short circuits and fire hazards [5,6]. While effective countermeasures are still being discussed [6], the birth of the first commercial Li-ion battery in the early 1990s was catalyzed by the need to overcome these challenges; this resulted in a decline in further technical progress towards, and commercialization of, what was referred to as the "ultimate lithium metal anode". If we wish to move forward towards achieving an ultimate energy density goal, technologies beyond Li-ion batteries will be needed. Fortunately, in recent years, this desire has led to an increased interest in other chemistries that employ metals poised to provide higher energy densities without compromising the safety of the battery; for example, metals such as magnesium and aluminum have been proposed [1,7]. Magnesium metal has been attracting increased attention as it possesses a higher volumetric capacity than lithium metal, i.e., 3832 mAh cm−3 vs 2061 mAh cm−3 for lithium. It may also provide an opportunity for battery cost reductions due to its natural abundance in the earth's crust (fifth most abundant element) [7,8]. More importantly, despite the fact that magnesium metal is not competitive with lithium metal on both specific capacity (2205 mAh g−1 vs 3862 mAh g−1 for lithium) and redox potential levels (−2.3 V compared to −3.0 V for Li vs NHE), the electrochemical processes related to its reversible plating/stripping have demonstrated the absence of dendrite formation, which has thus far alleviated safety concerns related to employing it as a negative electrode in batteries [9]. However, several technical challenges currently hamper the commercialization of rechargeable magnesium batteries. In fact, the absence of practical electrolytes and cathodes has confined demonstrations of rechargeable magnesium batteries to research laboratories. That is, low gravimetric energy densities on the order of a few hundred watt-hours per kilogram and limited demonstrated durability, coupled with very sluggish kinetics, currently make magnesium batteries far from practical. Fortunately, critical technical advancements geared towards overcoming the existing hurdles are made continuously [7,9].
These, along with past and future dedicated research efforts, will play a vital role in enabling the maturity and readiness of rechargeable magnesium battery technologies. Herein, a technical review of rechargeable magnesium batteries is provided with a focus on the most recent scientific advancements. We provide a brief summary of past breakthroughs as they were comprehensively reviewed elsewhere [7-10]. In keeping with high academic quality, non-peer-reviewed articles, patents and conference abstracts are not included. As the battery is a complex system employing several components, the review individually addresses progress related to the major components, which are the anode, the electrolyte and the cathode. For each of these components, the existing hurdles are individually outlined and our suggestions for future research needs are provided.

Review

1 Magnesium battery anodes

Since the demonstration of the first rechargeable magnesium battery, magnesium metal has been viewed as an attractive battery anode due to the desirable traits outlined in the Introduction. Nonetheless, the undesirable reactivity of this metal coupled with a relatively highly reducing electrochemical environment remains a source of several challenges, as explained in subsection 1.1.

Figure 1: Schematic depicting a simplified image of metal-electrolyte interfaces for magnesium and lithium metals. The magnesium metal, unlike lithium, experiences the formation of a blocking layer when exposed to conventional electrolytes, i.e., ionic salts and polar solvents. No Mg passivation (bare Mg) occurs in ethereal organo-magnesium electrolytes.

Aiming at overcoming these challenges, magnesium ion insertion anodes have recently been proposed and demonstrated. These are explained in subsection 1.2.

1.1 The magnesium metal anode

When discussing the magnesium metal, the nature of its interaction with the electrolyte represents an important and complex topic. That is, interfaces formed on the metal resulting from metal-electrolyte interaction have a direct impact on the electrochemical properties related to the dissolution and plating of the metal, i.e., the discharge and charge of the battery. Therefore a discussion of the magnesium metal anode is primarily that of its interactions with the electrolytes. In fact, it is well established [7,9-11] that the formation of a surface layer as a result of metal-electrolyte chemical/electrochemical interaction is detrimental for reversible magnesium deposition, as it blocks the diffusion of the magnesium ions, thereby preventing reversible electrochemical dissolution and plating from taking place (for an illustration see Figure 1). While the nature of this "blocking" layer has not been fully established, its formation has been explained by the instability of the electrolytes in proximity of the magnesium metal [11], namely that electrolyte decomposition occurred. The passivating nature of this layer stands in stark contrast to what is observed when analogous electrolytes are in contact with lithium metal, as the layer formed, referred to as the SEI or solid electrolyte interface, allows for lithium ion diffusion and was proven critical in preventing further decomposition of the electrolyte in the highly reducing environment during lithium plating [5,6]. The challenge resulting from electrolyte decomposition at the interface of the magnesium metal has plagued the development of electrolytes for rechargeable magnesium batteries.
For example, simple ionic magnesium salts such as perchlorates and tetrafluoroborates were deemed unsuitable as they formed a blocking layer on the magnesium metal [9-12]. Polar aprotic solvents such as carbonates and nitriles also formed a blocking layer on the magnesium metal [9-11]. This exacerbated the challenge of electrolyte development as it limited the choices of electrolytes to a handful of organo-magnesium reagent-solvent combinations, which were found to suffer from several disadvantages as described in section 2. Therefore, the discovery of new electrolytes that are compatible with rechargeable magnesium batteries and carry the promise of overcoming the existing hurdles represents an important milestone in magnesium battery R&D. Section 2 provides a review of a variety of new promising electrolytes, which we have categorized based on their type and physical state. An important property related to the electrochemical plating of magnesium is the morphology of the magnesium deposits. Although reports related to this topic are scarce [9], they show the absence of dendritic formations following magnesium plating from organohalo-aluminate electrolytes. A recent systematic study examined the morphology of the magnesium deposits from a magnesium organohalo-aluminate complex as a function of the deposition current density. Although no dendritic morphologies were observed, as shown in Figure 2, the preferred orientation of the deposits was found to depend on the current density. For example, the deposits obtained at low current densities exhibited the (001) preferred orientation while the (100) was favored at high current densities [13]. This suggested that the crystal growth of deposited magnesium is determined by the thermodynamic stability and the diffusion rates of Mg ions.

1.2 Magnesium ion insertion anodes

In order to overcome the limitations of the electrolytes induced by their reactivity with the magnesium metal, insertion type anodes were proposed as one potential solution. As described below, magnesium insertion anodes did offer the opportunity of using electrolytes made from magnesium ionic salts in polar aprotic solvents. However, they are currently faced with challenges caused by extremely sluggish magnesium insertion/extraction kinetics and electrode pulverization due to volume change. The use of insertion anodes was reported by Arthur et al. [14], who sought to demonstrate the possibility of electrochemically reversible insertion/extraction of magnesium ions into Bi, Sb, Bi0.88Sb0.12 and Bi0.55Sb0.45 alloys at potentials less than 0.4 V vs Mg using an organohalo-aluminate/tetrahydrofuran electrolyte. While the highest initial specific capacity at a 1 C rate was reported for Bi0.88Sb0.12 (298 mAh g−1), it dropped to 215 mAh g−1 after 100 cycles. The smallest capacity fade with cycling was observed for the Bi anode (pulverization due to volume expansion during magnesium insertion was observed). They also provided a proof of concept for the possibility of magnesium ion insertion/extraction into Bi from magnesium bis(trifluoromethanesulfonyl)imide, Mg(TFSI)2, in acetonitrile solvent, which are known to form a blocking layer on the magnesium metal. The reaction mechanisms of magnesium ion insertion/extraction into these anodes are currently under investigation, as the interfaces likely formed on the anode surface are non- or only partially blocking. Motivated by improving the capacity and lowering the insertion/extraction voltages of the magnesium ion,
Singh et al. [15] utilized Sn to demonstrate reversible and comparable anode performances in both organohalo-aluminate/tetrahydrofuran and Mg(TFSI)2/acetonitrile electrolytes (Figure 3a). (Figures 3c and 3d are reprinted with permission from [16]; copyright 2013 American Chemical Society.) The first insertion cycle showed a magnesiation capacity close to the theoretical value (903 mAh g−1, vs 384 mAh g−1 for Bi, run at 0.005 C), a low working potential (0.15 V vs Mg) and a lower hysteresis than that afforded by Bi (50 mV vs 90 mV). Pulverization due to substantial volume expansion during magnesium insertion was also observed. A major challenge with these anodes is the low capacities obtained even at relatively low cycling rates. For example, the capacity when magnesium was inserted at a 0.05 C rate into Bi and Sn was maintained at 70% and 20% of the theoretical values, respectively (Figure 3b). Enhancement of magnesium ion solid-state diffusion during the insertion/extraction process is expected to increase the reaction kinetics and improve the capacity retention. Shao et al. [16] recently reported a Bi anode with improved rate capabilities and capacity retention using Bi nanotubes (Figure 3c). The idea was to replicate the improved diffusion rates observed for Li-ion insertion into nanostructured anodes, i.e., Si and Sn [17,18]. The Bi nanotubes particularly displayed improved rate capabilities; for example, when cycled at a 5 C rate, about 60% of the theoretical capacity was obtained (note that capacity retention was only shown for a few cycles). Operation at 0.05 C resulted in a minimal capacity fade of 7.7% after 200 cycles. This was despite the fact that these nanotubes did not retain their structure and converted into what was described as interconnected nanoparticles upon the first magnesiation (Figure 3d). Interestingly, in a control experiment, the capacity retention of the nanotubes was found to be higher than that of Bi nanoparticles (a fade of 16.2% after 200 cycles). Further studies examining the evolution and nature of the structural and morphological transformations during magnesium ion insertion/extraction cycles would be desired.

1.3 Perspectives on future developments of magnesium battery anodes

When it comes to discussing the magnesium metal, the topic is mainly about the nature of the interfaces formed. Understanding these interfacial layers as new electrolytes are proposed goes to the heart of enabling practical rechargeable magnesium batteries. That is, the knowledge gained may result in discovering or even designing appropriate SEIs. This is important, as one should not forget the role SEIs play in minimizing the decomposition of the electrolytes in Li-based batteries, thereby having a direct impact on the durability of these batteries. Also, it is essential that the morphologies of the deposited magnesium, as a function of the electrolyte, current density and prolonged cycling, continue to be examined, especially as new electrolytes are emerging. Since the great success of Li-ion batteries resulted from replacing lithium metal with the graphite anode, a similar fate may await magnesium batteries that use Mg-ion insertion anodes. What is unique about magnesium-ion insertion anodes is the possibility to reversibly insert/extract magnesium ions in conventional ionic magnesium salts, such as Mg(TFSI)2, dissolved in a variety of organic solvents.
While the reason for this behavior has yet to be determined, one plausible explanation could be related to the thermodynamic potential of magnesium ion insertion into the host matrices: it may be that insertion occurs at higher electrochemical potentials than that of magnesium plating. Although the discovery and optimization of new materials are certainly required, several properties would need to be carefully examined in order for these anodes to become practical. First, it would be crucial that potential applications are considered as anodes are being developed, given their very low gravimetric and volumetric capacities compared to magnesium metal. Also, the capacities of these anodes should be taken into account in the value proposition of the overall system. The second point relates to the sluggish kinetics induced by the slow diffusion of magnesium ions. Indeed, the Li-ion battery literature is rich with innovative strategies proven effective in increasing rate capabilities, some of which might be adaptable to magnesium insertion type anodes. The third point relates to examining the presence and nature of possible insertion anode-electrolyte interfaces which may form electrochemically/chemically. Not only do these impact the rate of magnesium ion insertion/extraction, but they also provide valuable insight into potential interfaces that may enable facile magnesium ion diffusion, which, up to this point, remain unknown.

2 Magnesium battery electrolytes: State of the art and design guiding principles

The earliest report on a magnesium battery electrolyte that enables reversible electrochemical dissolution/plating of magnesium dates back to the 1990s. Gregory et al. [12] proposed several electrolytes for a rechargeable magnesium battery, initially guided by earlier reports on successfully plating magnesium metal from the electrolysis of Grignard reagents. These included Grignard, aminomagnesium chloride and organoborate reagents in ethereal solvents. They screened electrolytes based on the possibility of reversibly electrodepositing/stripping magnesium metal and intercalating magnesium ions into host compounds which served as cathodes. The results were used to guide the selection of the most promising electrolytes subsequently used in demonstrating the first rechargeable magnesium battery. Key findings included: 1) Ionic salts such as Mg(BF4)2 and Mg(ClO4)2 enabled reversible magnesium insertion into host materials; however, they formed a passivating film on the magnesium metal. This observation led them to correlate the ionicity of the salt, measured by the partial charge of the magnesium ion, to its compatibility with the magnesium metal, i.e., salts with a higher charge on the magnesium ion show low or no compatibility with magnesium. 2) Alkyl Grignard reagents had undesirable chemical reactivity towards the cathodes and were deemed inappropriate for battery demonstrations. 3) Some of the organoborates (magnesium dibutyldiphenylborate, Mg(BPh2Bu2)2, and tributylphenylborate, Mg(BPhBu3)2) supported reversible magnesium stripping/plating and Mg ion insertion into cathodes. These were also chemically inert towards the cathodes and had a high solubility in tetrahydrofuran (THF) solvent (>0.4 molar). Other organoborates were excluded from further studies due to their reactivity with the cathode (Mg(BBu4)2) or low solubility (Mg(BPh3Bu)2). Mg(BPh2Bu2)2 was used in the first demonstration of a rechargeable magnesium battery.
Unfortunately, the battery was operated at less than 2 V due to the low stability of Mg(BPh2Bu2)2 against electrochemical oxidation. Substitution of the boron with aluminum, or of the hydrogen in the aromatic rings with fluorine (as was demonstrated recently [8]), was proposed to help enhance its oxidative stability. Note that a recent report by Muldoon et al. [19] confirmed the low solubility of Mg(BPh4)2 and Mg(BPh3Bu)2, and found that Mg(BPh3Bu)2 had similar oxidative stability and magnesium metal compatibility as Mg(BPh2Bu2)2. In the early 2000s, Aurbach et al. reported a breakthrough which constituted preparing an electrolyte with higher oxidative stability (2.5 V vs Mg) than the organoborates (1.9 V vs Mg for Mg(BPh2Bu2)2) by combining a Grignard reagent with aluminum-based Lewis acids such as AlCl3−nRn, where R is an alkyl group [20]. Their concept was to strengthen the Mg-C bond in the Grignard reagent, through increasing its ionic character, by adding an electron-withdrawing Lewis acid. The optimized compositions of the organohalo-aluminate electrolytes enabled highly reversible magnesium deposition/stripping (100% coulombic efficiency) and insertion into host cathodes with faster insertion kinetics than the organoborates [21,22]. Their approach of using a Lewis base/Lewis acid combination to prepare magnesium battery electrolytes provided a foundation that was used to prepare other organohalo-aluminate electrolytes with high stability against electrochemical oxidation. Subsequent extensive studies by the same group reported other electrolytes based on combining Grignard reagents with other Lewis acids, such as boron-based ones; the electrochemical performance of those based on aluminum Lewis acids outperformed the boron-based ones [22]. Later reports by Aurbach et al. demonstrated another organohalo-aluminate electrolyte that, while possessing the optimized electrochemical performance of those reported previously, had an impressive stability against oxidation exceeding 3.0 V vs Mg. The idea was to remove the source of β-H elimination, believed to be causing the lower oxidative stability in previous electrolytes, by exchanging the Grignard alkyl ligand with a phenyl group [23]. More recently, other organohalo-aluminate electrolytes with high oxidative stability were reported by other groups. Examples included adding AlCl3 to the less nucleophilic amidomagnesium chloride (hexamethyldisilazide) [24], previously known to allow for reversible magnesium deposition/stripping [25]. Kim et al. [24] found that the crystallized product outperformed the in situ produced electrolyte (oxidative stability of up to 3.2 V vs Mg and higher magnesium deposition/stripping current densities). Another approach used a phenylmagnesium chloride combination with a boron-based Lewis acid in tetrahydrofuran, such as tris(pentafluorophenyl)borane [8] or tri(3,5-dimethylphenyl)borane [26], to form electrolytes stable up to 3.7 V and 3.5 V, respectively. Unfortunately, all these electrolytes, while demonstrated with impressive electrochemical stability windows, reversible magnesium dissolution/deposition properties and high bulk conductivity (i.e., 2 mS cm−1), share several critical drawbacks, which are: 1) The presence of chloride, which is an integral part of the make-up of these salts/complexes. This was found to cause severe corrosion of non-noble metals that becomes apparent at potentials exceeding 2 V vs Mg [7,8,26].
This is problematic as it prohibits using materials such as steel or aluminum as current collectors with these electrolytes. 2) Tetrahydrofuran is the preferred solvent, which is undesirable due to its high volatility and tendency to form peroxides. Aurbach et al. [22] demonstrated optimized compositions obtained from mixing the electrolytes they developed with less volatile ethers such as tetraglymes. However, tetrahydrofuran was still part of the best performing electrolytes, albeit in lesser amounts. 3) Although no systematic studies addressing the extent of the electrolytes' air sensitivity exist, it is likely that they would degrade following exposure to air. Motivated by overcoming the above problems, research efforts recently started shifting from typical organohalo-aluminate/organoborate-based electrolytes, and the discovery of new systems belonging to a variety of different reagents became of interest. In the next subsections, we review and present these new electrolytes based on their type and physical state. Table 1 summarizes the properties of representative electrolytes classified based on their types. Only those that enable highly reversible magnesium deposition and stripping (i.e., >80% coulombic efficiency) are shown.

2.1 Liquid electrolytes

Given the reactivity of magnesium metal towards most solvents such as carbonates, sulfoxides and nitriles, ethers have been the solvents of choice. New liquid electrolytes are reviewed below with emphasis on those that are tetrahydrofuran-free. We also summarize recent information reported on the nature of the electroactive species in typical organohalo-aluminates and in some of the new electrolytes.

2.1.1 Inorganic ionic salts: Until very recently, it had been generally accepted that simple ionic salts such as Mg(TFSI)2 and Mg(ClO4)2 are incompatible with magnesium metal (see the Introduction section). Motivated by solving the corrosion problem caused by chloride ions and eliminating tetrahydrofuran as a solvent/cosolvent, Mohtadi et al. [27] proposed a magnesium borohydride based electrolyte for the magnesium battery. The premise of their concept was that the BH4− ion, being a relatively strong reducing agent, could withstand the reducing environment of the magnesium anode. Their results demonstrated the first inorganic, halide-free and relatively ionic salt that could reversibly deposit and strip magnesium, using magnesium borohydride. Indeed, the work confirmed that ionic salts can be made compatible with the magnesium metal if the anion in the salt has sufficient reductive stability (note that this was also the first demonstration of Mg plating in a BH4−-containing system, as an old report on Mg plating using electrolysis (on a Cu cathode and Al anode) of a MgBr2/LiBH4 mixture in diethyl ether/tetrahydrofuran showed substantial boron impurities, likely generated from electrolysis side reactions; no information supporting Mg(BH4)2 formation was given [38]). Mohtadi et al. [27] also developed a magnesium borohydride-lithium borohydride electrolyte in dimethoxyethane (DME) solvent with reversible magnesium deposition/stripping at high coulombic efficiency (94%), high current densities (25 mA cm−2 stripping peak current) and low deposition overpotentials (−0.3 V), as shown in Figure 4. The stability against electrochemical oxidation was 1.7, 2.2 and 2.3 V (vs Mg) on platinum, stainless steel and glassy carbon electrodes, respectively.
As the borohydride electrolytes are not corrosive, these stability trends are opposite to those observed for other magnesium electrolytes. The higher stability of the borohydride on a non-noble metal suggests catalytic effects of platinum on BH4− decomposition. To this point, the borohydride electrolytes remain the only ionic, halide-free salts that are highly compatible with magnesium metal.

2.1.2 Non-Grignard-based haloaluminate reagents: In order to increase the stability of the electrolytes in air, avoiding the use of Grignard reagents is needed (i.e., RMgCl or R2Mg Lewis bases). Wang et al. [29] used phenolates to prepare new electrolytes (ROMgCl) with improved air stability, owing to the stronger Mg-O bond compared to Mg-C. Three phenolate electrolytes exhibiting good Mg reversibility were prepared; however, the conductivity and electrochemical oxidative stability were dependent on the alkyl group. The highest conductivity and oxidative stability, measured on a platinum electrode, were observed for a 0.5 M 2:1 2-tert-butyl-4-methylphenolate magnesium chloride:AlCl3 in tetrahydrofuran, at 2.56 mS cm−1 and 2.6 V vs Mg, respectively. Reversible magnesium deposition/stripping, albeit with an increased overpotential, was observed for the same electrolyte following exposure to air for three hours. A new systematic study by Nelson et al. [30] examined the oxidative stability of phenolates as a function of the substituents on the phenyl ring. Several electrolytes were prepared with electron-withdrawing (pentafluoro, trifluoromethyl) or electron-donating (methoxy) substituents. An oxidative stability, measured on a platinum electrode, of up to 2.9 V vs Mg was obtained for a 2:1 4-(trifluoromethyl)phenolate magnesium:AlCl3 in tetrahydrofuran. This electrolyte supported reversible magnesium deposition/stripping and had a high conductivity (2.44 mS cm−1). However, some degradation in the electrochemical performance was observed following exposure to air for six hours (i.e., lower current densities and higher overpotentials). Unfortunately, this suggested the instability of the phenolates upon prolonged exposure to air. Electrolytes prepared by replacing the phenolates with alkoxides were reported by Liao et al. [31], who prepared three new butoxy- and siloxy-based electrolytes. Their interest was to access the vast number of ligands offered by the alkoxides such that electrolytes with improved oxidative stability could be prepared. In the absence of the AlCl3 Lewis acid, the alkoxides had higher solubility in tetrahydrofuran than the phenolates and supported reversible Mg deposition/stripping. However, the addition of AlCl3 was necessary to improve their oxidative stability (one sixth of an equivalent of AlCl3 was added to mitigate its negative impact on the solubility of the alkoxides). For example, the addition of AlCl3 increased the oxidative stability of Me3SiOMgCl from 1.95 to 2.5 V vs Mg (on a platinum electrode). Both phenolate- and alkoxide-based electrolytes supported reversible magnesium ion insertion into the Chevrel phase Mo6S8 cathode [29-31]. As mentioned in the introduction, Kim et al. [24] reported a less nucleophilic 3:1 (hexamethyldisilazide)MgCl:AlCl3 electrolyte where the crystallized product had an oxidative stability of 3.2 V vs Mg on a platinum electrode (note that crystallization was necessary to achieve this performance).
More recently, Zhao-Karger et al. [32], also motivated by the lower nucleophilicity of sterically hindered amides, used magnesium bisamides to prepare two electrolytes by reacting magnesium bis(diisopropyl)amide (iPr2N) and magnesium bis(hexamethyldisilazide) (HMDS) with two equivalents of AlCl3. As shown in Figure 5, the HMDS-based electrolyte (both as prepared and crystallized) exhibited the best electrochemical performance and had a higher oxidative stability (3.3 V vs Mg) than the iPr2N-based one. Interestingly, the structure of the crystallized material obtained from the Mg(HMDS)2:2AlCl3 was the same as that reported by Kim et al. [24] for the (HMDS)MgCl:AlCl3. Another recent progress on non-Grignard haloaluminate electrolytes was reported by Doe et al. [33], who showed the possibility of magnesium deposition/stripping at high coulombic efficiencies simply from a MgCl2/AlCl3 mixture in tetrahydrofuran. Similar results were concurrently reported by Liu et al., who also showed the mixture to have a very low nucleophilicity [34]. Unfortunately, the MgCl2 electrolytes were found to be very corrosive; i.e., the stability on stainless steel was as low as 1.8 V vs Mg [34]. What is notable about the Mg(HMDS)2:AlCl3 and MgCl2:AlCl3 systems is that the in situ products exhibited wide electrochemical windows and high electrochemical performances, thereby eliminating the necessity of additional crystallization steps.

2.1.3 New design strategies for forming high-stability electrolytes: As described before, the high electrochemical oxidative stability of magnesium electrolytes has been primarily enabled by the formation of strong Al-C, Al-N or B-C bonds (formed by the addition of appropriate Lewis acids). A very recent study by Carter et al. [35] aimed to increase the oxidative stability of the Mg(BH4)2 electrolytes by strengthening the B-H bond through forming three-dimensional B-B bonds as in icosahedral boron clusters (closo-boranes). As such, the group exploited the high oxidative and thermal stability of closo-boranes to prepare electrolytes with a wide electrochemical stability window. The results demonstrated a novel carboranyl magnesium chloride electrolyte (1-(1,7-C2B10H11)MgCl) that is compatible with magnesium metal, possesses high oxidative stability (3.3 V vs Mg) and, to date, exhibits the lowest tendency to corrode non-noble metals observed for a chloride-bearing electrolyte (Figure 6). Also notable is that the stable anion consisted of a Mg-C center, as shown in Figure 6, indicating unique effects of the carborane scaffold. The cation was found to be the Mg2Cl3+ observed before for other systems (see section 2.1.5). This was the first demonstration that electrolytes with a wide electrochemical window could be prepared beyond the known approaches that use Lewis base/Lewis acid systems. This work opens new horizons for designing highly stable magnesium battery electrolytes.

2.1.4 Tetrahydrofuran-free electrolytes: Given the volatile nature of tetrahydrofuran (vapor pressure of 143 mmHg at 20 °C and a boiling point of 66 °C), it would be vital to discover electrolytes that are tetrahydrofuran-free. Aurbach et al. [22] demonstrated that they could utilize their organohalo-aluminate electrolytes in solvent mixtures of tetrahydrofuran and longer-chain ethers such as tetraglymes without inducing losses in their electrochemical performance.
However, it would be hard to fully eliminate the presence of tetrahydrofuran, as the organohalo-aluminates based on Grignard reagents tend to have a favorable performance in this solvent. Therefore, an important step in the development of the electrolytes would be demonstrating optimum performance in a tetrahydrofuran-free system. This may be enabled using electrolytes beyond those that use a Grignard Lewis base/Lewis acid reaction. In fact, highly reversible performance from a magnesium borohydride-lithium borohydride electrolyte, developed by Mohtadi et al. [27], was found in dimethoxyethane (monoglyme) solution. Indeed, the magnesium borohydride had a far superior electrochemical performance in monoglyme compared to that observed in tetrahydrofuran. Very recently, highly reversible performance (100% coulombic efficiency) for a similar borohydride electrolyte was demonstrated in diglyme solvent [28]. High cycling magnesium deposition/stripping efficiencies approaching 100% were also reported for 0.35 M (HMDS)2Mg-2AlCl3 in diglyme solution, where a high oxidative stability above 3.5 V vs Mg was obtained. Interestingly, the electrolyte stability measured on a stainless steel electrode was 0.4 V higher than that of a similar system in tetrahydrofuran (2.2 V vs Mg) [8]. At this time, all the electrolytes use ethereal solvents, which are more or less volatile. An attractive choice for eliminating the safety hazards of ethers would be using ionic liquids due to their very low volatility. Reversible magnesium deposition/dissolution from phenylmagnesium bromide [39] and alkylmagnesium bromide [40] was shown in ionic liquid solvents. The caveat was that tetrahydrofuran was used as a cosolvent and, as discussed earlier, a shift from Grignard reagents is hence necessary to allow for more flexibility in the solvent selection. Nuli et al. [41] reported reversible magnesium plating using conventional salts such as magnesium triflate (Mg(CF3SO3)2) in imidazolium-based ionic liquids. However, magnesium metal passivation was reported to take place [39,42].

2.1.5 On the electroactive species: In the case of typical organohalo-aluminate electrolytes, formed following the reaction between a Grignard reagent and AlCl3, it has been generally accepted that the magnesium charge carriers in the bulk are magnesium-chloride bonded ions existing as monomeric (MgCl+) and/or dimeric (Mg2Cl3+) species [43]. Kim et al. showed that Mg2Cl3+ is one of the electroactive species present in 3:1 (HMDS)MgCl:AlCl3 [24]. Studies on organoborates (crystallized out of their synthesis solution) suggested similar electroactive species to those in the organohalo-aluminates, i.e., MgR+ and Mg(BR4)+, where R = alkyl or aryl [44]. Given that the organohalo-aluminate electrolytes are by far the most established, detailed studies exist which were concerned with identifying the nature of the magnesium species, both in the bulk and at the magnesium metal-electrolyte interface. As the organohalo-aluminates were reviewed extensively [10], the discussion here is focused on the most recent studies concerning them. The discovery of new types of magnesium ion electroactive species which enable reversible magnesium plating is important for advancing the research and development of magnesium battery electrolytes. Below, we shed light on the nature of the different species suggested for the new electrolytes, per the available information.
a. Grignard organohalo-aluminate systems: The nature of the electroactive species present at equilibrium in the bulk solution and at the magnesium metal-electrolyte interface during magnesium plating was studied previously [43,44]. For the Mg(AlCl4−nRn)2 electrolyte, the presence of the adsorbed intermediate MgCl+·5THF at the metal surface during the deposition of magnesium was suggested. More recently, the presence of an intermediate during magnesium deposition from a 1:2 molar RMgCl:R2AlCl/THF (R = C2H5) was observed by Arthur [45] and Benmayza et al. [46] using the magnesium K-edge in operando soft X-ray spectroscopy. Their results, combined with the transport properties of the magnesium species, also suggested the interfacial electroactive species to be MgCl+·5THF. The dimeric Mg2Cl3+ species present in the bulk was discounted from being electrochemically active at the interface during magnesium deposition. Another important result was related to the measured low transport numbers of the magnesium ions. For example, the diffusion coefficient of the magnesium ionic species (i.e., Mg2Cl3+) was very low (2.26 × 10−7 cm2 s−1 in a 0.2 M solution, which is 10 times lower than that observed in a 1 M LiPF6-based electrolyte). Interestingly, the transference number t+, which determines the rate at which reversible magnesium deposition/stripping takes place, ranged between 0.018 and 0.19 at 0.40 and 0.15 M, respectively. This astonishing reduction in t+ values with increasing electrolyte concentration was attributed to the lowered mobility of the dimeric magnesium ions and an increased number of counter- and non-magnesium ions at high Lewis acid concentrations. This study helped to provide a better understanding of the electrochemical and transport properties in this complex system. b. Non-Grignard-based electrolyte systems: Indeed, the magnesium borohydride electrolytes offered new electroactive species beyond the known monomeric and dimeric Mg-Cl and RMg+ species present in the organohalo-aluminate and probably in the organoborate electrolytes. Guided by spectroscopic analyses of the borohydride electrolytes, Mohtadi et al. [27] proposed the magnesium electroactive species to be a magnesium ion bridge-bonded to one BH4−, although the presence of magnesium ions that are solely coordinated to the solvent molecules was not discounted. The electrochemical performance was suggested to be governed by the extent of salt dissociation. The substantial improvements in the electrochemical performance as the denticity of the solvent was increased, and following the addition of LiBH4, used as a source of a Lewis acid cation, further supported this hypothesis. In the case of the amidomagnesium (HMDS) based electrolyte [32], the dimeric Mg2Cl3+ electroactive species was similar to that found in other Grignard- and non-Grignard-based haloaluminate systems [24,26]. A similar species was reported for the carboranyl MgCl electrolyte [35]. Note that Mg2Cl3+ was also present in the crystallized products from MgCl2 mixtures with aluminum-based Lewis acids [34]. Based on the current progress, we can summarize that two distinct species that enable reversible magnesium deposition/stripping are known: 1) the Mg2Cl3+ and/or MgCl+ in organo/non-organo haloaluminates, organoborates and the carboranyl electrolyte, and 2) the MgBH4+ in the borohydride electrolytes. For none of the electrolytes reported thus far is there evidence that supports the presence of Mg2+.
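To put these transport numbers in perspective, a dilute-limit Nernst-Einstein estimate (our own illustrative arithmetic, not taken from [45,46]) of the conductivity carried by the Mg2Cl3+ cation alone, taking z = +1, c = 0.2 M and the quoted diffusion coefficient, gives:

% Dilute-limit Nernst-Einstein estimate for the Mg2Cl3+ contribution:
\sigma \approx \frac{z^2 F^2 c D}{RT}
= \frac{(96485\ \mathrm{C\,mol^{-1}})^2 \times 200\ \mathrm{mol\,m^{-3}} \times 2.26\times10^{-11}\ \mathrm{m^2\,s^{-1}}}
       {8.314\ \mathrm{J\,mol^{-1}\,K^{-1}} \times 298\ \mathrm{K}}
\approx 0.017\ \mathrm{S\,m^{-1}} \approx 0.17\ \mathrm{mS\,cm^{-1}}

This is roughly an order of magnitude below the ~2 mS cm−1 bulk conductivities quoted earlier, consistent with the small t+ values: most of the current in these electrolytes appears to be carried by species other than the magnesium-bearing cation.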
As described above, for the future material design of magnesium battery electrolytes, it is of significant importance to discern the electroactive species both in the bulk and at the interface between the anode and electrolyte. This is expected to be more beneficial than solely relying on optimizing the compositions/ratios of the reagents.

2.2 Solid magnesium electrolytes

As explained above, the solvents known to support optimum reversible Mg deposition/stripping are volatile, as they are ether-based. To overcome this challenge, one strategy would be trapping the solvent, used to solvate the magnesium ions, within a polymeric matrix. The electrolyte formed in this case is referred to as a gel electrolyte. This concept was previously applied to Li-ion battery electrolytes [5] and was later adopted for rechargeable Mg batteries [10]. Nonetheless, demonstrating a viable gel electrolyte for rechargeable magnesium batteries is not trivial, as it requires using magnesium reagents/salts that enable reversible magnesium deposition/stripping while being chemically inert towards the selected polymeric matrix. The electrolyte would also need to have an acceptable conductivity of magnesium ions at room temperature. Another strategy, which is far more challenging, is to create a solvent-free solid-state medium that enables magnesium ion conduction under practical conditions through magnesium ion diffusion, i.e., solid-state magnesium salts. While a few reports exist on the formation of gel electrolytes for magnesium batteries, reports on magnesium ion conduction in solid-state media are scarce. In fact, until recently, magnesium ion conduction at values on the order of 10−3 mS cm−1 occurred only at temperatures exceeding 500 °C. A review of the developments related to both strategies, with focus on those that demonstrated viable electrolytes, is presented below.

2.2.1 Organic solid/semi-solid electrolytes: The immobilization of magnesium electrolytes in polymeric matrices such as poly(vinylidene fluoride) (PVDF) and poly(ethylene oxide) (PEO) was reported by Chusid et al. [36]. The group impregnated magnesium organohalo-aluminate salts, such as Mg(AlCl2EtBu)2 dissolved in tetrahydrofuran and tetraglyme, in both PEO and PVDF matrices. These complex solutions were found to be inert towards the polymers used, and reversible magnesium deposition/stripping from these gel electrolytes was shown. The best electrochemical performance reported was for the Mg(AlCl2EtBu)2/tetraglyme/PVDF gel, as a high specific conductivity (3.7 mS cm−1 at 25 °C) was measured. This study not only showed the possibility of preparing gel electrolytes that are compatible with magnesium metal but also allowed for reversible Mg intercalation into the Chevrel phase Mo6S8 cathode. Other gel polymer electrolytes were reported [47-49]. Examples include those incorporating dispersed inorganic oxides such as nano-fumed silica. The oxides were added to improve the mechanical and electrochemical properties (1 mS cm−1 reported at room temperature) [49]. Unfortunately, all these gel electrolytes used magnesium salts known to be incompatible with the magnesium metal. A very recent study proposed using coordinatively unsaturated metal-organic frameworks (MOFs) as nano-media to immobilize magnesium phenolate and/or Mg(TFSI)2/triglyme electrolytes (phenolates were found to be more soluble in triglyme than in tetrahydrofuran) [50].
As the phenolates interacted strongly with the MOF crystallites, the addition of Mg(TFSI)2 (i.e., a weakly coordinating anion) was necessary to achieve good conductivity (0.25 vs 0.0006 mS cm−1 for the phenolate/MOF alone). No results addressing compatibility with magnesium metal or oxidative stability were provided; it is possible that this system is incompatible with magnesium metal due to the passivating nature of Mg(TFSI)2.
Inorganic solid-state magnesium ion conductors: Until very recently, magnesium ion conduction in inorganic salts had been observed only at temperatures exceeding 500 °C [51,52]. Matsuo et al. [53] studied the possibility of magnesium ion conduction in the high-temperature phase of magnesium borohydride using first-principles molecular dynamics (FPMD) simulations [53,54]. In subsequent work [37], guided by first-principles calculations based on density functional theory (DFT), the conduction of magnesium ions in both Mg(BH4)2 and Mg(BH4)(NH2) was investigated experimentally. The selection of these compounds was motivated by the ionic bonding nature of the magnesium ions, judged from the calculated Bader charge on the magnesium, and by the presence of cavities large enough to enable magnesium ion conduction through a hopping mechanism. A conductivity of about 10−3 mS cm−1 at 150 °C was measured for Mg(BH4)(NH2), which is three orders of magnitude higher than that of Mg(BH4)2, presumably due to the shorter distance between the two nearest Mg atoms (3.59 Å in Mg(BH4)(NH2) vs 4.32 Å in Mg(BH4)2). In addition, reversible magnesium deposition/stripping was demonstrated for the Mg(BH4)(NH2) electrolyte, as shown in Figure 7. Interestingly, the oxidative stability of the Mg(BH4)(NH2) salt was found to be in excess of 3 V vs Mg at 150 °C, which is higher than that reported for liquid Mg(BH4)2-ether systems at room temperature [27]. The high ionic conductivity of Mg(BH4)(NH2), albeit at 150 °C, together with reversible Mg deposition/stripping and high voltage stability, provides opportunities for developing practical Mg solid-state electrolytes based on novel borohydride salts.
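To put the three-orders-of-magnitude conductivity gap in perspective, one can ask how large a difference in hopping barrier it would imply under a simple Arrhenius picture. The snippet below is our own illustrative estimate under the stated assumption of equal prefactors; it is not a calculation reported in refs. [37,53,54].

```python
# Illustrative estimate (ours, not from refs. [37,53,54]): if the ~1000x
# conductivity gap between Mg(BH4)(NH2) and Mg(BH4)2 at 150 C came purely
# from different hopping barriers with equal Arrhenius prefactors,
# sigma ~ exp(-Ea / (kB T)) gives delta_Ea = kB T ln(ratio).
import math

kB = 8.617e-5            # Boltzmann constant, eV/K
T = 150.0 + 273.15       # measurement temperature, K
ratio = 1.0e3            # three orders of magnitude in conductivity

delta_Ea = kB * T * math.log(ratio)
print(f"implied barrier difference ~ {delta_Ea * 1000:.0f} meV")  # ~250 meV
```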
Perspectives on future developments of magnesium battery electrolytes
Unlike in the case of rechargeable lithium and sodium batteries, the development of electrolytes for rechargeable magnesium batteries has faced a distinct and unavoidable challenge: the formation of a passivation layer upon exposure of magnesium metal to numerous salts/solvents. Generally speaking, the battery system imposes several stringent requirements on the electrolytes, as they represent the bridge linking the anode with the cathode. Not only are they required to perform well in the proximity of two electrochemical environments operating at opposite extremes, but they must also provide acceptable bulk transport properties that allow them to respond swiftly to the power demands of the system. Additionally, it is essential that the electrolytes have acceptable safety properties, including high thermal stability, low volatility, low flammability, low toxicity, and low reactivity with ambient air. Developing electrolytes possessing all of these traits therefore represents a key challenge.
Since the first rechargeable magnesium battery was demonstrated in the early nineties, R&D efforts have primarily focused on creating electrolytes that are highly compatible with magnesium metal, followed by applying innovative strategies to improve other electrochemical properties. A main focus has been increasing stability against electrochemical oxidation, so that a competitive, high-voltage battery system could ultimately be enabled. Over the past two decades, the technical advancements made on magnesium battery electrolytes have resulted in state-of-the-art systems, primarily consisting of organohalo-aluminate complexes, whose electrochemical properties rival those observed in lithium-ion batteries: highly reversible performance, high bulk conductivity, and wide electrochemical windows. Despite these scientific feats, however, these electrolytes have several drawbacks, including corrosiveness, nucleophilicity (for those that are Grignard-based), air sensitivity, and the use of volatile solvents. Over the past two years, motivated by the desire to overcome these challenges, several new electrolytes that are compatible with magnesium metal have been proposed. It is interesting that in both previous and the most recent electrolytes, the familiar monomeric and dimeric Mg-Cl active species were found. One important challenge with these species is their slow transport properties; another is the presence of chloride ions, which makes them prime suspects in the corrosion issue. Hence, we believe that the discovery/design of new electroactive species is needed. Recent development in this direction is manifested in the borohydride electrolytes, where opportunities for increasing the oxidative stability are being explored and were demonstrated using closo-borane anions. Another common property among magnesium electrolytes is their air sensitivity; new approaches using alkoxides and phenolates offer lowered sensitivity to air. It would be interesting to determine their long-term durability and to see future designs that build on these systems, which are hopefully noncorrosive. To overcome the challenges of liquid systems, solid electrolytes could be an ideal choice; the discovery of magnesium compatibility and conduction in magnesium amide borohydride inspires confidence in this direction. Indeed, the portfolio of magnesium battery electrolytes has widened, and we hope that current research will fuel the next wave of innovations, driven by a further understanding of the properties of the electrolytes and their behavior in a battery system. Topics we suggest include:
1) Discerning the electroactive species and their interactions with both the magnesium metal and the cathode material, which may prove powerful in paving the path for designing modified electrolytes;
2) Determining important electrochemical transport properties both in the bulk and at the interface with the magnesium metal;
3) Understanding the extent of the air stability, thermal stability, and long-term durability of the electrolyte;
4) Understanding the effects of a battery environment on the electrochemical stability window, for example, by examining the oxidative stability on the cathode material rather than solely using metal/glassy carbon electrodes;
5) Lastly, developing corrosion-resistant substrates, such as pretreated surfaces, which may help in overcoming the corrosion issue.
However, we think that this effort may be worthwhile once electrolytes with very competitive performance are demonstrated. Would future electrolytes help magnesium metal one day become the "ultimate battery anode"? There is no clear answer at this time. However, the numerous breakthroughs and scientific advancements made so far make one hopeful that it has at least come one step closer.
Rechargeable Mg battery cathode
Much effort has been devoted to the development of Mg batteries and their cathodes over the past 70 years. Some cathode materials have been investigated practically for reserve-type Mg battery systems, typically paired with Mg or Mg-Al-Zn (AZ) alloy anodes and electrolytes based on either seawater or magnesium perchlorate (Mg(ClO4)2) solutions. A reserve battery requires high energy density, high power output, long lifetime, and superior low-temperature performance. Typical examples of cathodes for such Mg batteries, as summarized in the battery handbook, are AgCl, CuCl, PdCl2, Cu2I2, CuSCN, MnO2, and air [1]. These batteries could be operated as primary batteries fulfilling the aforementioned requirements; however, they could not be recharged and operated as secondary batteries. One reason considered for this non-rechargeability was water passivation of the anode surface: when the metal was exposed to water, a blocking layer such as Mg(OH)2 formed, accompanied by hydrogen gas generation. To recharge the battery, a large overpotential had to be applied because of the highly resistive blocking layer, and ultimately the anode-electrolyte interface, which determines the battery performance, could not be fully recovered. Because of these major hurdles with the anode, the challenges of the Mg battery cathode may have been masked. In fact, a proper understanding of the cathode reaction is still needed, and a growing list of candidate materials has been examined, including, among others [55], MgxMnSiO4 [56-58], WSe2 [59], sulfur [24,60], and oxygen [61-63]. In order to discover the next-generation Mg battery cathode, the most important challenges are overcoming the negative effects arising from divalent Mg2+ ions and maintaining high mobility of Mg2+ ions along the diffusion pathway. So far, despite research efforts to overcome these challenges, the very slow diffusion of Mg2+ ions and structural instability remain key hurdles in the development of working high-voltage cathodes. Here, we focus on recent progress in representative cathode materials for rechargeable Mg batteries.
Cobalt-based cathode materials
Since the 1990s, a variety of non-aqueous electrolytes have been adopted to evaluate and improve the rechargeability of the battery. It was believed that non-aqueous electrolytes consisting of either magnesium perchlorate in acetonitrile or magnesium organoborate in tetrahydrofuran were capable of overcoming the issues of water-containing systems. Gregory et al. surveyed a number of candidate cathode materials by chemical intercalation experiments and typical electrochemical methods [12]. Based on XPS analyses, it was reported that ZrS2 was able to host Mg, and RuO2 and Co3O4 were proposed as promising candidates for capturing Mg ions. These materials were also studied by Sutto et al., who examined their redox capability in a different non-aqueous electrolyte system [64,65].
According to their discussion, Co3O4 did not allow for sufficient magnesium insertion because of i) strong interactions between Mg2+ cations and oxygen atoms in the host lattice and ii) a drastic change of the host structure and particle size after magnesiation. An initial capacity of 74 mAh g−1 was observed at around 1 V vs Mg (Figure 8). One reason for this increase could be enhanced Mg2+ ion diffusion in the disordered structure compared to the ordered one. The disordered spinel Mg0.67Ni1.33O2 also showed a high OCV based on the same principle [66]. Unfortunately, the high initial voltage (over 3.0 V vs Mg) observed for these spinel materials could not be maintained during the rest period following charging, as a continuous voltage decay was observed. This means that these cathode materials exhibited high polarization due to slow diffusion of Mg2+ in the host lattice; thus, even for these materials, it was not possible to discharge the battery at voltages above 3.0 V vs Mg. Although these cathodes did not enable stable high-voltage performance, introducing a disordered structure into the Mg battery cathode is indeed a good strategy to neutralize the local charge density arising between inserted Mg2+ ions and the host lattice, and furthermore to accelerate intrinsic Mg2+ ion diffusion.
Vanadium-based cathode materials
A water-containing V2O5 cathode system in an organic electrolyte, such as Mg(ClO4)2 in propylene carbonate, was proposed [68,69]. It was expected that V2O5 would be capable of accommodating 2 mol of Mg2+ ions, equivalent to the V5+/V3+ redox reaction. However, according to the report, the electrochemical insertion of Mg2+ ions into V2O5 depended on the amount of water in the electrolyte, and the maximum content of Mg2+ ions was observed to be less than 0.6 mol: chemically bound water present in the channels of the V2O5 host prevented further magnesiation. Although the observed capacity was much lower than expected, hydration of Mg2+ ions is expected to mitigate the difficulty of their electrochemical insertion into the host lattice, as explained in a previous review [69]. Imamura's group employed thin-layer V2O5 cathodes, while Amatucci's group used nano-sized V2O5 (Figure 9b). Nano-sized V2O5, with a particle size distribution of 20-50 nm, brought about a higher discharge capacity and a narrower hysteresis with a higher working voltage than micron-sized V2O5. In both Imamura's and Amatucci's approaches, using a thin layer and nanoparticles allowed for a short diffusion length of Mg2+ ions, thereby improving the Mg battery performance. Although H2O-assisted intercalation into V2O5 has only been suggested, these approaches could eventually become important ways to accelerate Mg2+ ion diffusion in the lattice.
Molybdenum-based cathodes
A big success in developing a cathode for Mg batteries was presented in 2000 by Aurbach et al., who discovered an excellent material, the Chevrel phase (CP) Mo6S8, as a rechargeable Mg battery cathode [20]. The CP cathode was proven to have very stable performance, with less than 15% capacity fade over 2000 cycles at 100% depth of discharge. Notably, practical rates of 0.1-1.0 mA/cm2 and a wide temperature range from −20 to +80 °C were used.
As described in previous articles [7,20,73], these promising properties are enabled by the following features of the CP cathode: 1) electroneutrality derived from the delocalized Mo6 metallic cluster, 2) plenty of sites per cluster where Mg2+ ions can be accommodated for solid-state diffusion, and 3) high electronic conductivity. One drawback of the CP cathode is that the kinetics of Mg2+ ion diffusion depend strongly on the composition and operating temperature [23]. During initial magnesiation, 20-25% of the Mg2+ ions were trapped in the CP lattice and could not be extracted unless the temperature was elevated. Moreover, when the CP cathode was tested at a low temperature of around 15 °C, the capacity of about 80 mAh g−1 observed at a 1/10 C rate decreased to about 40 mAh g−1 at a 1 C rate. An effective countermeasure to promote faster kinetics in the CP cathode was the partial substitution of the sulfur in Mo6S8 by Se. The Se-substituted CP cathode showed excellent accessibility for Mg2+ ions, resulting in a higher capacity at higher rates and lower temperatures. Unfortunately, the CP cathode family shows relatively low working voltages, around 1.2 V vs Mg, and relatively low capacities, around 110 mAh g−1. To make the magnesium battery more practical, a Mg cathode with high energy density is strongly desired. Recently, it was reported that graphene-like MoS2 also works as a Mg battery cathode [74-76] (Figure 10a). Chen et al. found that this material exhibited an operating voltage of 1.8 V and a reversible capacity of about 170 mAh g−1 when combined with Mg nanoparticles as an anode. Similarly, TiS2, a common cathode material, has also been considered for Mg batteries [77] (Figure 10b). The operating voltage of TiS2 was not much higher than that of Mo6S8, and it suffered from limited rate and temperature performance; however, a higher capacity of about 180 mAh g−1 vs Mg was obtained using state-of-the-art nanotechnology. Therefore, transition metal sulfides, as prototypical intercalation host materials, may bring a new breakthrough for Mg battery cathodes.
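Using the voltage/capacity figures quoted above, a quick cathode-level energy-density comparison makes this motivation concrete. The sketch below is illustrative only (active-material basis; packaged-cell values would be much lower):

```python
# Cathode-level energy-density comparison from the figures quoted above
# (active material only; packaged-cell values would be much lower):
# energy (Wh/kg) = average voltage (V vs Mg) x specific capacity (mAh/g).
cathodes = {
    "Chevrel Mo6S8":      (1.2, 110),
    "graphene-like MoS2": (1.8, 170),
}
for name, (v, q) in cathodes.items():
    print(f"{name}: ~{v * q:.0f} Wh/kg (active material)")
# ~132 Wh/kg vs ~306 Wh/kg -- illustrating why moving beyond the
# low-voltage, low-capacity CP family matters for practical Mg cells.
```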
Manganese-based cathodes
Finally, the remaining attractive materials for Mg battery cathodes are MnO2 and its polymorphs [78-82]. MnO2 is widely used as a common cathode material in primary batteries with either Zn or Mg anodes, in lithium-ion secondary batteries, and furthermore in metal-air batteries. The various MnO2 polymorphs have been used as Mg battery cathodes coupled with either a magnesium organohaloaluminate electrolyte solution or a magnesium perchlorate non-aqueous electrolyte solution. In 2011, Zhang et al. demonstrated the redox capability of α-MnO2 during magnesiation and demagnesiation [81] (Figure 11a). α-MnO2, with its 2×2 tunnel structure, showed a high initial capacity of about 240 mAh g−1 and could be repeatedly discharged and recharged. Unfortunately, this cathode suffered severe capacity fading due to a drastic structural deformation from the tetragonal to the orthorhombic phase during magnesiation. While this is known to occur in all manganese-based cathodes for lithium-ion batteries, such structural instability during magnesiation is thought to be a key trigger that severely deteriorates α-MnO2 in non-aqueous Mg batteries. Very recently, Ling et al. proposed an alternative manganese material for a Mg battery cathode, the so-called post-spinel compound MgMn2O4 with a 2×2×1×1 structure [82] (Figure 11b). Theoretical calculations predicted that the above-described structural stability is significantly improved by controlling the tunnel size and shape for Mg2+ ion diffusion. As a result, the post-spinel compound, with its bigger tunnel size, facilitated the Mg insertion/extraction reaction more than α-MnO2 and also exhibited a relatively high operating voltage. In addition, the cooperative hopping of Mg2+ ions in the 2×2×1×1 tunnel was estimated to aid faster diffusion, resulting in a low diffusion barrier (≈400 meV) comparable to that of LiMn2O4, a typical lithium-ion battery cathode. Thus, structural modification for Mg2+ ion diffusion is another approach that could be used to achieve fast kinetics in the cathode and minimize the interactions of strongly bound Mg2+ ions with the host tunnels.
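To see why the ≈400 meV barrier matters, consider a simple transition-state-theory estimate of the hop rate. The snippet below is our own illustration with an assumed attempt frequency of 10^13 Hz; it is not taken from ref. [82]:

```python
# Rough transition-state-theory illustration (ours, not from ref. [82]):
# hop rate Gamma = nu * exp(-Ea / (kB T)) with an assumed attempt
# frequency nu = 1e13 Hz, showing how strongly the migration barrier
# controls room-temperature Mg2+ mobility.
import math

kB = 8.617e-5    # Boltzmann constant, eV/K
T = 298.0        # temperature, K
nu = 1e13        # assumed attempt frequency, Hz

for Ea_meV in (400, 500, 600):     # ~400 meV is the post-spinel estimate
    rate = nu * math.exp(-(Ea_meV / 1000) / (kB * T))
    print(f"Ea = {Ea_meV} meV -> hop rate ~ {rate:.1e} Hz")
# Every extra ~60 meV costs about one order of magnitude in hop rate at
# 298 K, which is why barriers comparable to LiMn2O4 are so valuable.
```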
Perspectives on rechargeable cathodes
In order to establish the non-aqueous Mg battery as a system, it should be noted that the cathode strongly governs the battery performance. A high energy density of the cathode is an indispensable requirement for Mg batteries to become a reality. To realize this, two approaches can generally be followed: one is high-voltage operation, the other high-capacity operation. Regarding the latter, although sulfur and oxygen cathodes have yet to be truly demonstrated for Mg batteries, they may offer potential high-capacity future cathodes. However, these cathodes are expected to face challenges similar to those encountered in Li battery systems; an important question is whether the typical issues present in Li-air and Li-sulfur systems can be solved in the Mg system. Herein, we focus on cathode materials for high-voltage operation. Generally, oxide-based materials should be suitable for this purpose, since they are theoretically able to reach higher redox potentials, as demonstrated in lithium-ion batteries. However, in terms of Mg2+ ion mobility, oxide materials currently suffer from sluggish diffusion, which has resulted in overall battery performance that is less promising than that of the sulfide materials. To the best of our knowledge, there are a couple of key routes to overcoming this undesirable situation. The first is to discover an appropriate host structure with faster kinetics, as already discussed above. From the viewpoint of the guest ion size relative to the host structure, Mg2+ as a guest ion is not an issue, because the ionic radius of Mg2+ (0.74 Å) is close to that of Li+ (0.68 Å). However, when considering the interaction between the guest ion and the host structure, the divalent nature of Mg2+ notably suppresses the fast diffusion observed in the monovalent Li+ system, because of 1) the tight attraction between Mg2+ and the host and 2) the strong repulsion between Mg2+ ions. As a result, sluggish diffusion of Mg2+ ions causes poor magnesiation and a non-dynamic situation in which mobile ions become stuck either in the diffusion pathway or on the surface. Structural designs that promote Mg2+ ion diffusion are thought to be the best way to discover promising Mg battery cathodes operating at high voltage. Another key challenge is controlling the charge transfer resistance observed at the cathode/electrolyte interface, which should become more apparent once the sluggish diffusion issue is fully overcome. In fact, the charge transfer resistance has been studied carefully in lithium-ion batteries and was found to play a significant role in determining battery performance. A surface film formed on a cathode active material should allow the transport of Mg2+ ions, but it may sometimes act as a blocking layer, hampering charge transport. Even when solvated Mg2+ ions can pass through the surface film, a desolvation process must take place before the ions can migrate into the host structure. Mg2+ ions are probably more strongly solvated than Li+ ions, so the charge transfer resistance is expected to be considerably higher; in a practical setup, this is an important factor for promoting further magnesiation. As has been the case for state-of-the-art technologies such as lithium-ion batteries, Mg battery electrolytes will also need to be optimized for high-voltage operation of the cathode, and the cathode/electrolyte interface will have to be engineered so that the superior electrochemical properties, especially those of high-voltage cathodes, are not lost. Finally, for high-voltage battery operation, close attention must be paid to corrosion of the current collectors. In any environment with an electrolyte salt (e.g., magnesium perchlorate or magnesium organoborate) dissolved in an organic solvent, corrosion of the current collectors must be suppressed in order to properly monitor the cathode properties in a battery setup. In particular, high-voltage systems need to be studied in a suitable electrolyte environment with a wide electrochemical window. Further progress in Mg battery cathodes is needed and should go hand-in-hand with the development of noncorrosive and electrochemically stable electrolytes.
Conclusion
Indeed, current state-of-the-art rechargeable magnesium battery technologies are far from reaching their promised potential, and several hurdles remain, particularly those resulting from the absence of appropriate electrolytes and of high-capacity/high-voltage robust cathodes. Nonetheless, we are hopeful that an improved understanding of the chemistry and physics of these batteries, together with future innovative ideas, may after all allow for battery engineering and system optimization per application needs. This may enable commercialization of these batteries, sooner or later.
The Lnc-ENST00000602558/IGF1 axis as a predictor of response to treatment with tripterygium glycosides in rheumatoid arthritis patients
Abstract Aims Growing clinical evidence suggests that not all patients with rheumatoid arthritis (RA) benefit to the same extent from treatment with tripterygium glycosides (TG), which highlights the need to identify RA-related genes that can be used to predict drug responses. In addition, single genes as markers of RA are not sufficiently accurate for use as predictors. Therefore, there is a need to identify paired expression genes that can serve as biomarkers for predicting the therapeutic effects of TG tablets in RA. Methods A total of 17 pairs of co-expressed genes were identified as candidates for predicting an RA patient's response to TG therapy, and the genes of the Lnc-ENST00000602558/IGF1 axis were selected for that purpose. A partial least squares (PLS)-based model was constructed based on the expression levels of Lnc-ENST00000602558/IGF1 in peripheral blood. The model showed high efficiency for predicting an RA patient's response to TG tablets. Results Our data confirmed that the co-expressed genes of the Lnc-ENST00000602558/IGF1 axis mediate the efficacy of TG in RA treatment, reduce tumor necrosis factor-α-induced IGF1 expression, and decrease the inflammatory response of MH7A cells. Conclusion We found that the genes of the Lnc-ENST00000602558/IGF1 axis may be useful for identifying RA patients who will not respond to TG treatment. Our findings provide a rationale for the individualized treatment of RA in clinical settings.
Bioinformatics analyses based on numerous gene expression profiles have played an important role in identifying lncRNA and RA-related gene interactions, as well as in predicting disease behavior and a patient's response to treatment. 7,21,22 Although gene expression microarray assays have the advantages of high throughput and high sensitivity, they do not fully elucidate the entire biological process that regulates RA. In addition, differences in sample size and quality can cause inconsistencies among the numerous differential transcripts identified by microarrays. 8,10 To address these issues, we performed a molecular network analysis that integrated the high-throughput benefits of gene expression profiling with our knowledge of drug-disease interactions. Next, the differential expression of paired genes was assessed, and important regulators were identified according to the differential expression patterns and network topological features. Subsequently, a partial least squares (PLS) model based on the expression of paired genes was constructed to predict the therapeutic effect of TG treatment. Lastly, the function of the filtered pair-expressed genes, which mediate how TG functions in RA treatment, was further confirmed in rheumatoid arthritis synovial fibroblast (MH7A) cells.
| Patients
From October 2018 to October 2019, a total of 52 RA patients were recruited at the Division of Rheumatology of Guang'anmen Hospital. The 52 RA patients were assigned to two main groups: a discovery group (n = 12; 6 responders and 6 nonresponders) and a validation group (n = 40; 25 responders and 15 nonresponders). The Affymetrix EG 1.0 Array genome-wide expression profile assay was used to detect lncRNAs and mRNAs that were differentially expressed in peripheral blood mononuclear cells (PBMCs) and related to the efficacy of TG tablets in the discovery group. The validation group was used to validate the expression of the candidates by quantitative real-time polymerase chain reaction (RT-qPCR) assays. The inclusion criteria for RA patients were based on either the 2020 ACR/European League Against Rheumatism (EULAR) criteria or the American College of Rheumatology (ACR) 1987 criteria for RA. 23 In brief, patients were required to have (1) a symptom duration of less than 1 year; (2) treatment with TG tablets for at least 4 weeks; and (3) available clinical and laboratory parameters at the initiation of TG tablets and after 12 weeks, as well as available peripheral blood samples. All RA patients were between 18 and 70 years old; patients whose symptoms had lasted more than 1 year or who had been treated with TG tablets for less than 4 weeks were excluded from the study. The patients received TG tablets (20 mg, oral administration, three times per day) continuously for 12 weeks. Patients who achieved ACR20 were defined as responders; otherwise, they were classified as nonresponders.
| PLS model
A PLS model was constructed based on gene expression in the peripheral blood of RA patients, and fivefold cross-validation was used to evaluate the model's performance. Components were constructed by sequentially maximizing the covariance between a linear combination of the predictors and the response variable. In the data matrix X, P represents the candidate genes and N the cases, and Y represents the N × 1 vector of response values (the indicator of responder or nonresponder). The components were constructed by maximizing this objective criterion based on the sample covariance; thus, the weight vector w satisfies w = argmax_w cov(Xw, Y). The weight coefficients of the genes were then calculated using a training data set. The candidates in the PLS model are denoted P = {P_i}, i = 1, 2, 3, and the PLS model score S is defined as S = Σ_i w_i L_{P_i}, where L_{P_i} represents the expression level of candidate P_i in an RA patient. Subsequently, the training data set was input into the PLS model, and the cutoff value yielding the largest area under the receiver operating characteristic (ROC) curve (AUC) was used to calculate the threshold value T of the score. The PLS classifier predicted a responder when the score was > T.
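For concreteness, a minimal sketch of such a two-gene PLS classifier is shown below. The synthetic data, variable names, and the use of Youden's J statistic to pick the cutoff are our own illustrative choices, not details from the study:

```python
# Minimal sketch of a two-gene PLS classifier of the kind described above.
# The synthetic data, names, and Youden's-J cutoff are illustrative only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n = 40
X = rng.normal(size=(n, 2))    # columns: [Lnc-ENST00000602558, IGF1]
y = (X[:, 0] - 0.7 * X[:, 1] + 0.5 * rng.normal(size=n) > 0).astype(int)

pls = PLSRegression(n_components=1)
pls.fit(X, y)
scores = pls.predict(X).ravel()            # S: weighted gene combination

fpr, tpr, thr = roc_curve(y, scores)       # pick T maximizing Youden's J
T = thr[np.argmax(tpr - fpr)]
pred = (scores > T).astype(int)            # predict responder if S > T

print("AUC:", round(roc_auc_score(y, scores), 3),
      "| threshold T:", round(float(T), 3))
```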
For the fivefold cross-validation, the samples in the discovery cohort were divided into a training data set and a testing data set. Fivefold cross-validation was performed because of the small sample size of the discovery cohort. The mean accuracy, sensitivity, specificity, and AUC values determined from the ROC curves were calculated using the standard formulas: accuracy = (TP + TN)/N, sensitivity = TP/(TP + FN), and specificity = TN/(TN + FP), where TP, TN, FP, and FN refer to the numbers of true-positive, true-negative, false-positive, and false-negative results in a test, and N refers to the total number of predicted samples.
| Gene expression profiling
Samples of peripheral blood (PB) were collected from RA patients treated with TG tablets (n = 12). Density gradient centrifugation was used to isolate the PBMCs, which were subsequently washed in sterile phosphate-buffered saline (PBS). Next, the total RNA of the PBMCs was extracted and eluted with 15 μL of RNase-free water. The gene expression profiles (including mRNAs and lncRNAs) of the responders and nonresponders to TG tablets were detected using an Affymetrix EG 1.0 array system. A total of 57 reliable RA targets were collected from the DrugBank database. mRNAs that were differentially expressed between responders and nonresponders were identified using log2 fold-change and p < .05 criteria, as determined by the unpaired Student's t-test. The Database for Annotation, Visualization and Integrated Discovery (DAVID; http://david.abcc.ncifcrf.gov/home.jsp, version 6.7) was used to analyze gene function via pathway enrichment. Pathway data were obtained from the Kyoto Encyclopedia of Genes and Genomes (KEGG; http://www.genome.jp/kegg).
| Gene signal transduction network analysis
Reliable gene-gene interaction data were obtained from the public STRING database (Search Tool for the Retrieval of Interacting Genes/Proteins; version 10.0, http://string-db.org), retaining interactions whose aggregate evidence score was higher than the median of all scores. In our network, nodes refer to genes that were differentially expressed between the responder and nonresponder populations, and edges refer to the interactions between nodes. To determine candidate gene biomarkers for TG tablets, we assessed the topological importance of each node by calculating the following topological characteristics: (1) node degree, the number of edges linking a node to other genes, which measures how connected a gene is to the rest of the network; (2) node betweenness, the importance of a node as an intermediary between other nodes in the network; and (3) node closeness, which reflects the time required for information to propagate from a given node to all other nodes in turn. The greater a node's degree/betweenness/closeness, the more important that node is in the network.
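The three centrality measures described above can be computed directly with standard network libraries. The toy graph below (gene names other than IGF1 are placeholders) is an illustration, not the study's actual network:

```python
# Toy illustration of the three node-importance measures above using
# networkx; the graph and gene names (other than IGF1) are placeholders.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("IGF1", "geneA"), ("IGF1", "geneB"), ("IGF1", "geneC"),
    ("geneA", "geneB"), ("geneC", "geneD"), ("geneD", "geneE"),
])

degree = dict(G.degree())                    # node degree
betweenness = nx.betweenness_centrality(G)   # intermediary importance
closeness = nx.closeness_centrality(G)       # propagation-speed proxy

for g in G.nodes:
    print(f"{g}: degree={degree[g]}, betweenness={betweenness[g]:.2f}, "
          f"closeness={closeness[g]:.2f}")
```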
| Enzyme-linked immunosorbent assay (ELISA)
ELISA assays were performed as described previously [25,26]. In brief, MH7A cells were transfected with the indicated short hairpin RNA (shRNA) or small interfering RNA (siRNA) for 24 h, and subsequently treated with TNF-α combined with different concentrations of TG for another 24 h. After treatment, the culture supernatants were collected, and 100 μL of supernatant was added to the ELISA plates and incubated for 1 h at room temperature. After incubation, the plates were washed three times with PBS, and the horseradish peroxidase-conjugated secondary antibody was then incubated with the plates for 1 h at room temperature. Finally, TMB was added to measure the IL-1β, IL-6, and TNF-α levels in the culture supernatants. ELISA kits for IL-1β (201-LB), IL-6 (D6050), and TNF-α (210-TA) were purchased from R&D Systems.
| Cell transfection
A pLKO.1-puro plasmid (GenePharma) was used to construct the shRNA for Lnc-ENST00000602558 knockdown, and the corresponding empty vector served as a negative control. The si-IGF1 and control siRNA were purchased from Shanghai GenePharma Co., Ltd. Transient transfection of shRNA or siRNA was performed in six-well (or 96-well) plates using Lipofectamine® 3000 (Invitrogen; Thermo Fisher Scientific, Inc.) according to the manufacturer's protocols. Briefly, MH7A cells were seeded in six-well (or 96-well) plates at a density of 2 × 10^5 (or 2 × 10^3) cells/well, and shRNA or siRNA transfection was performed when the cells reached 40%-60% confluence. All molecules were transfected at a concentration of 50 nM. After transfection, the culture plates were placed in a constant-temperature incubator at 37 °C and 5% CO2 for 48 h.
| Fluorescence in situ hybridization (FISH)
MH7A cells were seeded onto 12-well slides and cultured in DMEM medium containing 10% FBS. When the cells reached 80% confluence, they were treated with TNF-α or PBS (Ctrl), with or without TG, for 1 h. Next, the treated cells were fixed with 4% paraformaldehyde, blocked with PBS containing 5% goat serum, and permeabilized with 0.2% Triton X-100/PBS for 30 min. The cells were then hybridized with a Cy3-labeled Lnc-ENST00000602558 probe (GenePharma) overnight. The next morning, the cells were washed with 20× SSC eluent (Servicebio) and then incubated with IGF1 antibody (Abcam, ab106836, 1:500) overnight at 4 °C. The following morning, the slides were re-warmed for 1 h at room temperature and then incubated with a fluorescent-dye-conjugated secondary antibody (Abcam, ab150129, 1:1000) at 37 °C for 1 h. After counterstaining with DAPI (Abcam, ab228549, 1:5000), images were captured with a Nikon Eclipse TS100 microscope equipped with Micro-Manager 1.4.22 acquisition software.
| RNA pulldown assay
The cells were homogenized in 200 μL of cell lysis buffer (10 mM Tris-HCl, pH 8.0, 10 mM NaCl, 3 mM MgCl2, 0.5% NP-40, and a protease inhibitor cocktail), after which the cell lysates were immunoprecipitated using an RNA 3′ End Desthiobiotinylation Kit (Pierce Biotechnology). Protein-A agarose beads (Invitrogen, 101006) were used to isolate the immunoprecipitates, which were subsequently reverse-transcribed and then digested with proteinase K overnight at 65 °C. The immunoprecipitated protein and input protein were examined by western blot analysis. 27
| Statistical analysis
All statistical analyses were performed using IBM SPSS Statistics for Windows, version 19.0 (IBM Corp.). Quantitative data are expressed as the mean ± standard error of the mean of results obtained from three independent experiments. Each experiment was performed using at least six samples per group. The two-tailed, unpaired Student's t-test was used for pairwise comparisons of genotypes or treatments. One-way analysis of variance (ANOVA) was used for comparing three or more groups, as indicated in the figure legends and elsewhere. To assess statistical significance, data from three or more independent experiments were analyzed using Tukey's post hoc test with a 95% confidence interval after ANOVA.
| Identification of differentially expressed lncRNAs and mRNAs that responded to TG treatment
To identify differentially expressed gene pairs related to the response of RA patients to TG treatment, we first compared the lncRNA expression profiles of RA patients who did and did not respond to TG treatment (six patients per group). A total of 1592 lncRNAs were differentially expressed between the two groups, of which 782 were upregulated and 810 were downregulated (Figure 1A). The profiles of the differentially expressed lncRNAs revealed distinctive patterns for responders and nonresponders to TG treatment, as determined by unsupervised hierarchical clustering (Figure 1B). Similarly, as shown in the heat maps (Figure 1C), we screened 212 mRNAs whose expression characteristics were significantly related to the response of RA patients to TG treatment. Subsequently, a KEGG pathway enrichment analysis of the differentially expressed mRNAs was performed. Those results showed that the changed genes were significantly enriched in biological processes and pathways related to glycosaminoglycan biosynthesis and PI3K-Akt signaling, suggesting that the genes associated with a patient's response to TG treatment were also associated with the immune response (Figure 1D).
| Identification of paired expressed genes based on the discovery cohort
A Pearson analysis of lncRNA/mRNA pairs was performed to identify candidate genes predictive of a patient's response to TG treatment. The correlation analysis identified 422 and 451 lncRNA/mRNA pairs in the effective and poor-efficacy groups, respectively. In addition, 28 lncRNA-mRNA pairs showing a significant correlation (p < .05) in both the effective group and the poor-efficacy group were further screened. Subsequently, 57 differentially expressed genes corresponding to known RA therapeutic targets were selected from the DrugBank database. The selected candidate genes are functionally involved in signaling pathways associated with major pathological events during RA progression, such as "inflammatory cell infiltration," "inflammation," "synovial pannus formation," "angiogenesis," "joint destruction," and "bone resorption," as well as "drug metabolism." Considering their significantly differential expression patterns, great network topological importance, and functional relevance to RA, we selected these genes as candidate gene biomarkers (Figure 2A). Subsequently, a gene signal transduction network consisting of 17 pair-expressed genes was constructed. Following this calculation, Lnc-ENST00000602558/IGF1 was identified as the most highly correlated co-expression gene pair in the gene-gene interaction network associated with the response to TG treatment (Figure 2B).
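A minimal sketch of the pairwise Pearson screen described above is given below; the random arrays stand in for the actual microarray profiles, and the gene names are placeholders:

```python
# Sketch of the pairwise Pearson co-expression screen described above;
# the random arrays stand in for the real lncRNA/mRNA microarray profiles.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_patients = 6                               # per response group
lnc = rng.normal(size=(5, n_patients))       # 5 placeholder lncRNAs
mrna = rng.normal(size=(4, n_patients))      # 4 placeholder mRNAs

pairs = []
for i, x in enumerate(lnc):
    for j, y in enumerate(mrna):
        r, p = pearsonr(x, y)
        if p < 0.05:                         # significance cutoff from text
            pairs.append((f"lnc{i}", f"mRNA{j}", round(r, 2)))
print("significant co-expression pairs:", pairs)
```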
After identifying Lnc-ENST00000602558/IGF1 as a candidate pair, reverse transcription quantitative PCR analysis was used to validate the microarray data in the validation cohort of 40 patients, which consisted of 25 patients who responded and 15 patients who did not respond to TG treatment. Consistent with the microarray data, the levels of IGF1 expression in the peripheral blood of responders were significantly lower than those in the peripheral blood of nonresponders, as determined using two internal references, RPS18 and GAPDH (Figure 2C). In addition, the levels of Lnc-ENST00000602558 expression were markedly higher in the peripheral blood of responders than in that of nonresponders, which was also consistent with the microarray data (Figure 2D and Table 1).
| The PLS-based model efficiently predicted a patient's response to TG
To further evaluate the role of the ENST00000602558/IGF1 axis in predicting a patient's response to TG treatment, a partial least squares (PLS)-based model using the expression levels of ENST00000602558 and IGF1 in peripheral blood was constructed. The performance of the model was validated by testing the levels of ENST00000602558 and IGF1 expression in RA peripheral blood samples (normalized to 18S and GAPDH, respectively). The weight values of ENST00000602558 and IGF1 were determined first. When 18S was used as the internal reference, the weight values of ENST00000602558 and IGF1 were −0.8251 and 0.5650, respectively, and the threshold value was −0.07. When GAPDH served as the internal reference, the weight values of ENST00000602558 and IGF1 were −0.7780 and 0.6282, respectively, and the threshold value was −0.035 (Table 2). Subsequently, the high reliability and accuracy of the PLS model based on the ENST00000602558/IGF1 axis were further confirmed by comparison with PLS models that used IGF1 or lncRNA expression alone. The results showed that the accuracy and AUC values of the PLS-based models using ENST00000602558 or IGF1 expression alone were 84% and 67.7%, respectively, when 18S served as the internal control; this was consistent with the PLS-based models that used GAPDH as the internal control (accuracy and AUC values of 94% and 75.9%, respectively) (Figure 3 and Table 3). As shown in Table 4, the PLS models based on ENST00000602558 or IGF1 expression alone were less accurate in predicting a patient's response to TG tablets than the PLS-based model constructed from the ENST00000602558/IGF1 axis, which used both GAPDH and RPS18 as internal controls. In summary, the reliability and efficacy of the PLS model that incorporated both IGF1 and lncRNA expression were significantly higher than those of the PLS models that used either IGF1 or lncRNA alone.
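Applying the reported weights and threshold is then a one-line score computation. The sketch below uses the 18S-normalized weights quoted above; the input expression values are hypothetical placeholders, and the sign convention of the normalized expression is assumed to match that used in the study:

```python
# One-line application of the reported PLS weights and threshold (18S
# normalization). Input expression values are hypothetical, and the sign
# convention of the normalized expression is assumed to match the study's.
w_lnc, w_igf1 = -0.8251, 0.5650    # reported weights (18S reference)
T = -0.07                          # reported threshold

def predict_response(lnc_expr: float, igf1_expr: float) -> str:
    """Classify a patient from normalized expression levels."""
    score = w_lnc * lnc_expr + w_igf1 * igf1_expr
    return "responder" if score > T else "nonresponder"

print(predict_response(lnc_expr=-0.5, igf1_expr=0.2))   # -> responder
```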
| Treatment with TG increased Lnc-ENST00000602558 expression and decreased IGF1 expression in TNF-α-induced MH7A cells
In an attempt to understand the effect of TG on IGF1 expression during RA development, MH7A cells were examined for their levels of IGF1 expression. MH7A cells were first treated with different concentrations of TG for 24 h to evaluate the toxicity of the drug; our data showed that TG concentrations below 100 μg/mL had little influence on cell survival (Figure 4A). To investigate the effect of TG on Lnc-ENST00000602558 and IGF1 expression, the MH7A cells were treated with TNF-α for 24 h to mimic a rheumatoid cellular environment, after which the cells were incubated with 25 μg/mL (low dose, TG-L), 50 μg/mL (medium dose, TG-M), or 100 μg/mL (high dose, TG-H) of TG for an additional 24 h. Subsequent RT-qPCR analyses showed that treatment with TNF-α alone significantly reduced Lnc-ENST00000602558 expression. In contrast, the levels of IGF1, which plays an important role in RA development, were markedly increased in MH7A cells treated with TNF-α. However, TG reversed both the TNF-α-induced decrease in Lnc-ENST00000602558 levels and the TNF-α-induced increase in IGF1 levels, in a dose-dependent manner (Figure 4B). Additionally, a western blot analysis showed that TG reduced the increase in IGF1 protein levels induced by TNF-α (Figure 4C). An increased IGF1 level is reported to cause the secretion of pro-inflammatory factors. In line with that premise, the levels of pro-inflammatory factors in the supernatants of differently treated MH7A cells were examined by ELISA. The results showed that TG markedly reduced the levels of IL-1β, IL-6, and TNF-α in the cell supernatants (Figure 4D,E). In summary, these results suggested that TG can regulate Lnc-ENST00000602558/IGF1 expression and decrease TNF-α-induced inflammation in MH7A cells.
| TG decreased IGF1 expression in an Lnc-ENST00000602558-dependent manner
To determine whether Lnc-ENST00000602558 is necessary for TG-mediated regulation of IGF1 expression, three pairs of shRNAs were designed and used to knock down Lnc-ENST00000602558 expression in MH7A cells. RT-qPCR analyses showed that sh-LncRNA-3 effectively decreased Lnc-ENST00000602558 expression in the cells and was therefore used for subsequent studies (Figure 5A). MH7A cells were transfected with sh-NC or sh-LncRNA and cultured for 24 h; they were then treated with different concentrations of TG, followed by incubation with TNF-α. We found that treatment of the cells with TNF-α decreased Lnc-ENST00000602558 expression; however, this effect was attenuated when the cells were treated with TG. In addition, decreased expression of Lnc-ENST00000602558 caused by sh-LncRNA was also noted in MH7A cells treated with TNF-α and TG (Figure 5B). Similarly, IGF1 expression was increased in MH7A cells treated with TNF-α and then decreased after treatment with TG. As hypothesized, sh-LncRNA significantly attenuated the TG-induced decrease in IGF1 expression at both the mRNA and protein levels in MH7A cells treated with TNF-α (Figures 5C,D). To further examine the effect of Lnc-ENST00000602558 expression on the function of TG, the levels of pro-inflammatory factors secreted by MH7A cells were analyzed by ELISA. The results showed that the levels of IL-1β, IL-6, and TNF-α in the supernatants of MH7A cells treated with TNF-α were significantly higher than those in the supernatants of control cells; however, all of these cytokines were significantly decreased following TG treatment. Consistent with the changes in IGF1 expression, the ELISA results revealed that the concentrations of IL-1β, IL-6, and TNF-α in the culture media of the sh-RNA groups were significantly higher than those of the sh-NC groups treated with TNF-α and TG (Figure 5E). Collectively, these results suggested that knockdown of Lnc-ENST00000602558 impeded the effect of TG on the TNF-α-induced rheumatism cell model, as indicated by increased IGF1 levels and induced production of pro-inflammatory cytokines.
| Lnc-ENST00000602558 directly regulated IGF1 expression
To determine the underlying mechanism by which Lnc-ENST00000602558 affects IGF1-induced secretion of pro-inflammatory factors in the RA cell model, IGF1 expression was knocked down using three pairs of IGF1-specific siRNAs. RT-qPCR results confirmed that IGF1 mRNA was expressed at low levels in the MH7A cells transfected with si-IGF1-1, and thus si-IGF1-1 was used for subsequent studies (Figure 6A). Similarly, it was demonstrated that TG decreased IGF1 expression in MH7A cells treated with TNF-α; however, knockdown of Lnc-ENST00000602558 expression notably attenuated the effect of TG on IGF1 expression. Moreover, the levels of IGF1 mRNA and protein were significantly decreased in MH7A cells that had been treated with TNF-α and TG and then co-transfected with si-IGF1 and sh-Lnc-ENST00000602558 (Figures 6B,C). In addition, analysis of inflammatory cytokine levels revealed that knockdown of Lnc-ENST00000602558 expression in MH7A cells significantly restored the levels of IL-1β, IL-6, and TNF-α following treatment with TNF-α and TG; however, transfection with si-IGF1 significantly decreased IGF1 expression in MH7A cells that had been treated with TNF-α and TG and then transfected with sh-LncRNA (Figure 6D). Therefore, these results suggested that Lnc-ENST00000602558 suppresses the secretion of inflammatory cytokines from MH7A cells by regulating IGF1 expression after treatment with TNF-α and TG.
A previous study reported that lncRNAs can regulate gene expression by directly interacting with their target proteins (23). Our data showed that Lnc-ENST00000602558 interrupted TNF-α-induced IGF1 expression at the transcriptional level. To further confirm whether Lnc-ENST00000602558 regulates IGF1 expression via a direct interaction with IGF1, a fluorescence in situ hybridization (FISH) analysis was performed. The FISH results showed that the fluorescence intensity of Lnc-ENST00000602558 almost disappeared after TNF-α treatment; in contrast, treatment with TG restored the fluorescence intensity of Lnc-ENST00000602558 compared to cells treated with TNF-α alone. In addition, compared with the TNF-α treatment group, the fluorescence intensity of IGF1 was decreased in MH7A cells treated with TNF-α and TG, and the redistributed IGF1 signal was especially profuse in the Lnc-ENST00000602558 regions, suggesting that Lnc-ENST00000602558 might directly interact with IGF1 and thereby regulate that protein's stability (Figure 7A). RNA pulldown assays also confirmed that Lnc-ENST00000602558 was able to bind the IGF1 protein in TNF-α-induced MH7A cells, and TG treatment diminished the amount of IGF1 protein bound to Lnc-ENST00000602558 (Figure 7B). These findings, combined with the abnormal secretion of pro-inflammatory factors observed in TG- and sh-RNA-treated MH7A cells, indicated that Lnc-ENST00000602558 participates in the therapeutic effect of TG, at least partially, by directly interacting with the IGF1 protein.
| DISCUSSION
TG has been widely used to treat RA, 28,29 and has attracted extensive attention due to its strong efficacy. 15,30,31 However, not all RA patients benefit to the same degree from TG treatment. 14,32
Tripterygium wilfordii Hook F (TwHF) preparations affect various human systems, including the digestive, reproductive, and cardiovascular systems. The most common side effects of TwHF preparations involve the digestive system: oral administration of TwHF causes local irritation of the gastrointestinal mucosa and some mild symptoms, such as dry mouth, fatigue, and loss of appetite. 33 TwHF can also induce rash, skin pigmentation, skin itching, oral ulcers, and hair loss in the skin and its accessory structures. 34,35 In addition, chronic treatment with TwHF also affects the reproductive system, 36-38 the liver (hepatotoxicity), 39,40 the blood and hematopoietic system, 41,42 and the cardiovascular system. 43 TwHF also causes dizziness, lethargy, insomnia, neuritis, diplopia, and other central and peripheral nervous system toxicities. Some other adverse drug reactions, including rash, itching, hair loss, and facial pigmentation, have occasionally been observed in different individuals. 44 Hence, side effects should be monitored regularly during the clinical use of TwHF preparations.
TG tablets, which contain the main effective ingredients of T. wilfordii, are the most commonly used TwHF-based therapy and display better therapeutic effects than several disease-modifying antirheumatic drugs according to recent clinical observations. 20 However, due to the limited number of long-term clinical trials, it is difficult to make a comprehensive evaluation of the efficacy of TG preparations in treating RA. Moreover, the pathogenesis and etiology of RA and the exact mechanism of action of TG preparations have not been fully clarified, which seriously hinders the wide acceptance of this preparation in countries other than China as well as the individualized treatment of RA patients. In addition, TG and its extracts contain a variety of chemical components that may have synergistic and/or antagonistic effects. 45 Overall, approximately 30% of RA patients treated with TG tablets fail to achieve clinical improvement. 46,47 To address this issue, the whole-genome expression profiles of RA patients treated with TG tablets were investigated to identify differentially expressed gene pairs that might predict a patient's response to TG treatment. Differentially expressed lncRNA-mRNA pairs were obtained, and a co-expression regulation network was used to search for determinants of the individual therapeutic effect of TG treatment. Among the candidates obtained, the Lnc-ENST00000602558/IGF1 axis, which showed significant differential expression among a series of different lncRNAs and mRNAs, was selected for our study. We then constructed models to predict therapeutic efficacy in RA based on the expression of Lnc-ENST00000602558, IGF1, and the Lnc-ENST00000602558/IGF1 pair. Our results showed that, compared with a model based on either IGF1 or Lnc-ENST00000602558 expression alone, a model based on both Lnc-ENST00000602558 and IGF1 expression had higher predictive accuracy and a larger area under the ROC curve. More importantly, our new PLS model based on the expression of the Lnc-ENST00000602558/IGF1 pair revealed the need for a deeper understanding of how molecular regulatory networks function in response to therapy.
This study examined individual differences among RA patients that might affect their response to treatment with TG tablets. Our findings showed that RA patients have certain individual biochemical characteristics that greatly affect their response to TG. Several differentially expressed lncRNA-mRNA pairs were identified, and a regulated network of lncRNA-mRNA co-expression was used to identify individual differences that affected the efficacy of TG in treating RA patients. Thus, the process used to select drugs can be improved by taking individual patient characteristics into account, helping patients achieve the best therapeutic effect while minimizing drug toxicity and cost. Our current study was conducted using RA patients rather than an animal disease model or an in vitro cell culture model. We examined how lncRNA-mRNA expression was related to individual differences in the efficacy of TG in the treatment of RA, and we established an lncRNA-based molecular prediction model that could be used for the personalized treatment of RA with TG. Our results bring us closer to the use of personalized RA treatment in the clinic.
Subsequently, cellular functional tests were performed to further verify the regulatory effect of TG on the Lnc-ENST00000602558/IGF1 axis and to confirm whether the clinical efficacy of TG is related to this gene pair. In the in vitro studies, increased expression of IGF1, a primary mediator of inflammation, 18,48 was noted upon TNF-α challenge. Here, we revealed that Lnc-ENST00000602558 could reduce TNF-α-induced IGF1 expression. Furthermore, IGF1 expression was increased in MH7A cells when Lnc-ENST00000602558 expression was suppressed by transfection with sh-LncRNA-3, indicating that Lnc-ENST00000602558 is a potent inhibitor of inflammation that acts by regulating IGF1. To help verify this hypothesis, we separately knocked down Lnc-ENST00000602558 and IGF1 expression in MH7A cells and subsequently evaluated the secretion of pro-inflammatory factors after treatment with TNF-α and TG. As expected, the inhibitory effect of TG on the secretion of pro-inflammatory factors by MH7A cells was significantly blocked when Lnc-ENST00000602558 expression was knocked down. Nevertheless, Lnc-ENST00000602558 had little influence on inflammation when IGF1 was knocked down in MH7A cells, suggesting that Lnc-ENST00000602558 primarily regulates RA via IGF1 expression and its downstream pathway.
Our study also revealed the role of Lnc-ENST00000602558 as an important regulator of the IGF1 signaling pathway. To date, the transcriptional regulation of IGF1 expression and pro-inflammatory factor secretion has largely been attributed to the IGF1 transcription and translation module, which plays an integrative role in relaying physiological signals to a key subset of transcription factors that direct and coordinate the expression of pro-inflammatory genes. 22,49
Our data demonstrate that Lnc-ENST00000602558 is an essential regulator of this transcriptional module, and the basis for this conclusion is manifold. First, Lnc-ENST00000602558 deficiency in MH7A cells shows striking parallels to the phenotype observed in IGF1-overexpressing MH7A cells, including altered expression of pro-inflammatory factor genes and changes in the secretion of pro-inflammatory factors. In addition, IGF1 knockdown restored the impaired inflammatory effects caused by Lnc-ENST00000602558 depletion, including altered gene expression and secretion of pro-inflammatory factors. Finally, an in vitro FISH analysis revealed that Lnc-ENST00000602558 directly interacts with the IGF1 protein. The mechanistic studies presented here suggest that inhibition of a pro-inflammatory effect can occur independently of canonical signaling events, as the lncRNA directly binds its regulator in macrophages. It is conceivable that the mechanisms utilized by Lnc-ENST00000602558 are more complex than the model we propose.
As a lymphohemopoietic cytokine, IGF-1 has profound positive effects on immune function. 50 IGF-1 binds to T and B cells and induces anti-CD3-stimulated T cell proliferation. 51 It also activates T cell Akt, thereby enhancing lymphocyte survival. 52 Recently, IGF-1 was reported to prevent cord blood T cells from undergoing spontaneous apoptosis when cultured in a serum-free medium. 53 Hence, IGF-1 influences the onset of inflammatory diseases, and the abundance and profile of IGF-1 therefore serve as important determinants of RA signaling. A recent study showed that IGF-1 levels in the serum and synovial fluid were significantly lower in patients with RA. 54 However, IGF-1 alone has limitations in reflecting clinical symptoms, such as joint swelling or tenderness, when evaluating the disease activity of patients with RA. Therefore, in the present study, we investigated the Lnc-ENST00000602558/IGF1 axis to assess biomarkers distinguishing responsive from nonresponsive RA patients treated with TG tablets, and we identified candidate gene biomarkers according to both the differential expression patterns and the network topological features.
This study reveals a previously undiscovered function of a signaling network consisting of Lnc-ENST00000602558 and IGF1 in RA patients treated with TG. Here, we systematically integrated microarray data generated by a differential gene expression analysis with the topological features of a gene signal transduction network. Our mechanistic studies revealed a pathway that regulates IGF1 expression via Lnc-ENST00000602558. Additionally, our results indicate a possible role for the Lnc-ENST00000602558/IGF1 axis in regulating the expression and secretion of pro-inflammatory factors. Our proposed molecular mechanism suggests that MH7A cell-derived Lnc-ENST00000602558 acts as a novel and important factor that improves the therapeutic effect of TG on RA by inhibiting IGF1 expression. These findings increase our understanding of various molecular aspects of inflammation and contribute to the development of strategies for predicting the therapeutic effect of TG treatment. In summary, we propose that, in RA patients, increased levels of Lnc-ENST00000602558 caused by TG treatment may contribute to differences in the clinical response to TG treatment. Beyond the established adverse effects of protracted corticosteroid use, the Lnc-ENST00000602558/IGF1 axis may be a biomarker and molecular target for use in the treatment of RA.
FIGURE 1 Identification of differentially expressed lncRNAs and mRNAs that responded to TG treatment. (A) Transcriptome changes in TG responder versus TG nonresponder patients. Each dot represents a single gene transcript level as a fold-change. Genes that were significantly upregulated are shown in red and genes that were significantly downregulated are shown in blue. (B and C) Heat maps showing hierarchical clustering of lncRNAs (B) and mRNAs (C) that showed changes in the comparison between the responder (n = 6) and nonresponder (n = 6) patients. In the cluster analysis, red represents upregulated genes, and green represents downregulated genes. (D) KEGG pathway enrichment analysis of the commonly changed mRNAs in TG responder versus TG nonresponder patients. The red column shows pathways related to inflammatory reactions. KEGG, Kyoto Encyclopedia of Genes and Genomes; lncRNAs, long noncoding RNAs.

FIGURE 2 Identification of candidate biomarker genes that predict response to TG based on the discovery cohort. (A) Venn diagram representing common significantly changed transcripts from 28 co-expressed gene pairs and 57 differentially expressed genes, which were the RA therapeutic targets, as obtained from the DrugBank database. (B) The 17-gene major submodule constructed using the direct interactions among those genes. Red and blue circle nodes represent lncRNAs that were upregulated and downregulated, respectively. The node sizes represent the fold-change in gene expression level in ascending order. The yellow lines represent co-expression content. (C) The levels of Lnc-ENST00000602558 and IGF1 expression were detected by quantitative PCR analysis. Each dot shows the expression levels of candidate genes in each individual patient (n = 25 and 15 for the nonresponder and responder groups in the validation cohort, respectively). 18S and GAPDH were the internal references. Error bars represent the standard error of the mean. ****p < .0001. GAPDH, glyceraldehyde phosphate dehydrogenase; IGF1, insulin-like growth factor 1; lncRNAs, long noncoding RNAs.

As shown in Table 4, the PLS model based on ENST00000602558 or IGF1 expression alone was less accurate in predicting a patient's response to TG tablets than the PLS model constructed from the ENST00000602558/IGF1 axis (Table 3).

TABLE 1 Clinical and laboratory parameters of RA patients enrolled in the current study.
Risk Assessment of Bauxite Maritime Logistics Based on Improved FMECA and Fuzzy Bayesian Network

Because of the many limitations of the traditional failure mode, effects, and criticality analysis (FMECA), an integrated risk assessment model with improved FMECA, fuzzy Bayesian networks (FBN), and improved evidence reasoning (ER) is proposed. A new risk characterization parameter system is constructed in the model. A fuzzy rule base system based on the confidence structure is constructed by combining fuzzy set theory with expert knowledge, and BN reasoning technology is used to rank maritime logistics risk events by hazard degree. The improved ER, based on weight distribution and matrix analysis, can effectively integrate the results of risk event assessment and realize the hazard evaluation of the maritime logistics system from an overall perspective. The effectiveness and feasibility of the model are verified by carrying out a risk assessment on the maritime logistics of importing bauxite to China. The research results show that the highest-priority risk events in the maritime logistics of bauxite are "pirates or terrorist attacks" followed by "workers' riots or strikes". In addition, the bauxite maritime logistics system as a whole is at a medium- to high-risk level. The proposed model is expected to provide a systematic risk assessment model and framework for the engineering field.

Introduction

As a raw material for many industrial products, the demand for bauxite in China has shown a rising trend in recent years. Due to the shrinking of domestic bauxite resources and the impact of environmental protection policies, China needs to import a large amount of bauxite from abroad every year to meet demand, and its degree of foreign dependence is continuously increasing. According to Chinese national customs statistics, in 2019 alone, China's bauxite imports reached 101 million tons, accounting for 72.4% of the global bauxite seaborne trade volume. The imported bauxite comes mainly from countries with rich bauxite resources such as Guinea and Australia.

As one of the key links in the import of bauxite, maritime logistics is characterized by long distances, numerous transit nodes, and a complex sea environment, with many potential unsafe factors. In addition, bauxite is easily fluidized, which further undermines the robustness and reliability of maritime logistics links. Therefore, it is necessary to evaluate the risks of China's imported bauxite shipping logistics to minimize the impact of potential risk factors on the stable and continuous import of bauxite, and to ensure the orderly and healthy development of China's aluminum industry.

The failure mode, effects, and criticality analysis (FMECA) is widely used in the field of risk assessment and reliability analysis. However, the traditional FMECA method has many limitations, mainly reflected in the incompleteness of its risk characterization parameters, its failure to distinguish the importance of different parameters, and the limited discrimination of risk priority number (RPN) values [1,2]. In response to these problems, researchers have carried out a great deal of work.
Based on the characteristics of plastic production, Gul, Yucesan, and Gelik [1] proposed an improved FMEA combined with a fuzzy Bayesian network (FBN) to evaluate failures in plastic production. The risk characterization parameters were reconstructed in the model and weighted by the fuzzy best-worst method. Similarly, Wan et al. [3] combined the fuzzy belief rule method with Bayesian networks to establish a new maritime supply chain risk assessment model. In that model, three sub-parameters were introduced to characterize the consequences of the failure event, yielding a more complete risk characterization parameter system, and the parameters were weighted through the analytic hierarchy process (AHP) method. In view of the advantages of the Bayesian network (BN) in dealing with uncertainty, Ma et al. [4] proposed a Bayesian network construction method that combined FMEA and a fault tree analysis (FTA) for system security assessment. Liu et al. [5] proposed a new risk priority model based on the D number and gray correlation to optimize FMEA and then conduct a risk assessment.

Considering that a large amount of uncertain information arises in the evaluation process, fuzzy set theory has been widely used due to its flexibility, reliability, and strong operability in handling uncertain information. Lee et al. [2] proposed a new fuzzy comprehensive evaluation method using structural importance and fuzzy theory, which effectively dealt with subjective ambiguity under uncertain conditions. In addition, Renjith, Kumar, and Madhavan [6] proposed a fuzzy RPN method to prioritize system failures. This method evaluated risk parameters through fuzzy linguistic variables and connected the linguistic variables to the fuzzy RPN through an IF-THEN rule library, which effectively overcomes the shortcomings of the traditional RPN method. Alyami et al. [7] introduced Bayesian networks based on fuzzy rules and developed an advanced failure mode and effects analysis (FMEA) method to assess the criticality of dangerous events in container terminals.

Based on the above research, this paper proposes a systemic risk assessment model combining FBN and improved ER theory based on the improved FMECA (see Figure 1). In this model, a new risk characterization parameter system is constructed and the parameters are weighted through appropriate methods, while fuzzy set theory is used to deal with the uncertainty in the expert scoring process. This is accomplished by creating a fuzzy rule base with a confidence structure and making full use of Bayesian network reasoning technology, which can effectively obtain an accurate evaluation and clear rating of failure modes. The improved evidence reasoning (ER) theory can effectively integrate the results of fuzzy BN inference to realize the assessment of the system risk.
Literature Review

Maritime logistics play an increasingly important service-sector role in supporting the fast development of domestic and international trade. Efficiency and effectiveness are the two critical concerns in logistics management [8]. Efficiency means less spending to achieve more output, while effectiveness means reaching the goal in an uncertain situation. Such goal achievement can make operations leaner and more efficient, but simultaneously make the logistics vulnerable [9]. Therefore, maritime logistics are exposed to various natural and man-made risks [10]. How to operate safely becomes a more critical issue in view of the various risk factors, and it is attracting growing attention from researchers and practitioners. Research addressing container maritime logistics has grown significantly in recent years, but there is little research addressing bulk cargo maritime logistics [11], e.g., nickel mineral, bauxite, grain, etc. This study therefore draws on references from broader areas of maritime research.

Risk is a major influencing factor that can be interpreted as the probability that an event or action may adversely affect the anticipated goal in maritime transportation [12]. Risk identification, analysis, assessment, and mitigation constitute the cycle of risk management. Maritime logistics risk management is a decision-making process to minimize the adverse effects of accidental losses based on risk assessment [13]. Compared to traditional risk management, maritime logistics risk management aims to identify and mitigate risks from the perspective of the entire supply chain [14].

Some research focuses on risks related to shipping activities. Siqi Wang et al. [15] identified the risk factors of the dry bulk maritime transportation system from the aspects of people, ships, and cargo, and used the Markov method and the multi-state system method to integrate a probability model of the maritime traffic safety risk state. Jiang, M. et al. [16] took the Maritime Silk Road as the research object and concluded from an accident data analysis that ship factors are the main factors affecting the occurrence of accidents. Øyvind Berle et al. [17] analyzed the failure factors of the dry bulk shipping supply chain from the aspects of supply, capital flow, transportation, communication, internal operation/capacity, and human resources.

Other research focuses on the risks arising from the cargo itself. For example, Delyan Shterev [18] studied the safety of liquefiable bulk cargo carried by sea and pointed out that the relationships within the combined system of ships, cargo, and people can be used to reduce accidents involving liquefiable cargo at sea. Munro and Mohajerani [19] analyzed seven cases of ship capsizing from the viewpoint of cargo fluidization and concluded that the main cause was excessive cargo moisture. Ju et al.
[11] developed a discrete element method (DEM) to simulate the whole process of liquefaction of fine-particle cargo and identify the key parameters that lead to liquefaction. Lee [20] analyzed a marine accident caused by nickel ore liquefaction and showed experimentally that the risk can be reduced by changing homogeneous loading into alternate loading. Daoud, Said, Ennour, and Bouassida [21] combined physical experiments and numerical methods to improve the understanding of cargo liquefaction mechanisms and security in maritime transportation.

(2) Risk Assessment Methods in Maritime Logistics.

Methods developed and applied in maritime logistics risk management can be divided into qualitative and quantitative. Some research highlights managing the uncertainties in assessing the risks in maritime logistics [3,22,23]. Khan et al. [24] proposed an object-oriented Bayesian network to predict the ship-ice collision probability. Wan et al. [3] established a model combining a belief rule base and a Bayesian network to assess the risk of the maritime supply chain, considering that the collected data were highly uncertain. Some literature analyzes and assesses the risk in the maritime supply chain from the perspectives of visibility, robustness, and vulnerability. For example, because of the fuzziness and uncertainty of expert risk assessments, Emre Akyuz et al. [25] adopted fuzzy numbers and the bow-tie method to carry out a quantitative comprehensive risk analysis of the liquefaction risk of dry bulk carriers. Liu et al. [26] combined a multi-centrality model and a robustness analysis model to analyze the vulnerability of the maritime supply chain, and verified the feasibility of the model on the Maersk Asia-Europe shipping route. Zavitsas, Zis, and Bell [27] analyzed the link between the environment and the resilience performance of the maritime supply chain and built a framework to reduce operating costs and the risks that may disrupt the maritime supply chain. Vilko et al. [28] identified and assessed the risk of multi-modal maritime supply chains from the perspective of visibility and control. Lam and Bai [10] used the quality function deployment approach to improve the resilience of the maritime supply chain.

Much of the risk-related literature focuses on specific risks in the container supply chain, especially after the 9/11 terrorist attacks in 2001 [13,23,29]. With the development of IT technology, more information has become visible on shipping companies' official websites, and cyber-attack has become one of the risk sources; Polatidis, Pavlidis, and Mouratidis [30] proposed a highly parameterized cyber-attack path discovery method to evaluate the risk in dynamic maritime supply chain risk management. This method is more efficient than traditional methods because it outputs the most probable paths instead of all paths.
The above discussion indicates that risks in the maritime logistics context have attracted academic attention but remain under-researched, especially in bulk cargo maritime logistics risk analysis and assessment. More importantly, most of the literature focuses on the container supply chain [12,13,22,23] or on specific risks within it [13,31]. However, some mineral cargoes, such as bauxite and nickel mineral, are fluidization-prone cargoes for which the water percentage in the cargo requires close attention, and others, such as grain, require ventilation. These kinds of maritime logistics differ greatly from the container supply chain in that the cargo can easily fluidize during maritime transportation, producing a free surface effect or separating into solid and liquid flow layers. During maritime transportation the permeability of the cargo is low; owing to human error or bad ventilation, the water percentage in the cargo may reach or exceed the limit value, which can further produce a free surface effect. Combined with bad sea conditions or other natural and man-made disasters, this can easily cause the ship to capsize.

Consequently, this paper aims to develop a systemic risk assessment model combining FBN and improved ER theory based on the improved FMECA, and thereby improve the safety of bauxite maritime logistics. To start with, the identification of potential failure modes obtained by the failure mode and effects analysis (FMEA), the Bayesian network construction based on the failure relationships embedded in the failure modes, and the definition of the probability parameters are discussed. Risk characterization parameters are important indicators used to characterize and describe the hazards of failure events. A scientific, reasonable, and complete risk characterization parameter system is conducive to a more comprehensive and accurate grasp of the characteristics of risk factors and improves the accuracy and reliability of the risk assessment results. Traditional FMECA uses the three parameters of occurrence, severity of consequences, and detection to characterize risk factors [32]. Given the characteristics of maritime logistics, this paper introduces three sub-parameters under the severity parameter S to measure and distinguish the severity of a risk after it occurs. On this basis, a relatively complete hierarchical risk parameter index structure (the risk characterization parameter system) is constructed (see Figure 2).
Construction of Risk Assessment Model for Bauxite Maritime Supply Chain Based on FBN

When maritime logistics are affected by an uncertain event, the usual manifestations are that normal ocean transportation is disturbed and transportation is delayed; in serious cases, the maritime logistics chain may even be broken. For bauxite maritime logistics, this reduces the reliability of maritime logistics services. For time- and temperature-sensitive goods, the consequences of time delays are often more serious than for ordinary goods. Additional costs refer to the increase in a series of extra expenses caused by risk factors, such as additional management costs or expenses incurred by risk drivers. Safety and security losses refer to the damage suffered by the physical elements participating in or constituting the maritime logistics after a failure event, such as personal injury, damage to transported goods, and damage to port infrastructure or ships. To a certain extent, the impact of a risk event on the system can be summarized by the increase in additional costs. This paper aims to measure the consequence parameters more comprehensively and in detail from different perspectives, such as the physical elements of the system, time, and currency, to avoid the unclear description caused by a single unified overview.

Risk Parameter Weighting Based on AHP-Entropy Weight Method

Given the obvious hierarchical relationships and differences in importance between the parameters of the constructed system, this paper uses appropriate weight coefficients to quantify the relative importance of the different parameters.
(1) AHP

According to the established risk parameter characterization system, the parameters of the same layer are compared in pairs, and a judgment matrix is constructed to express the relative importance of the parameters of that layer with respect to the parameters of the upper layer. After passing the consistency check, the weight vectors of the different experts for the parameters of the same layer are calculated, and the arithmetic average method is used to aggregate the experts' evaluations and determine the final weight vector of that layer. Finally, the weight of each parameter is multiplied by the weight of the upper-layer parameter to which it belongs, giving the comprehensive weight of the risk parameter in the entire system.

(2) Entropy method

Entropy is a concept derived from thermodynamics and mainly reflects the degree of disorder in a system. The entropy value in information theory reflects the amount of information carried by an indicator. The smaller the entropy value, the greater the dispersion of the data covered by the indicator, the more information the indicator carries, and the greater the corresponding weight. When using the entropy method to determine the parameter weight vector, expert experience is needed to score the parameters. The score is between 1 and 10; the higher the score, the greater the importance of the risk parameter. The steps for calculating the parameter weight vector using the entropy method are as follows [33].

(i) Construction of the evaluation matrix. Assuming that m experts evaluate and score n risk parameters, the original evaluation matrix is $R = (x_{ij})_{m \times n}$, where $x_{ij}$ is the evaluation value of the jth risk parameter given by the ith expert.

(ii) Normalization of the initial matrix. Standardizing the evaluation matrix ensures the correctness and simplicity of the calculation results. The standardized matrix is $R^* = (r_{ij})_{m \times n}$ with $0 \le r_{ij} \le 1$, where $r_{ij}$ is the standardized value of the jth risk parameter given by the ith expert.

(iii) Calculation of score proportions. To calculate the information entropy, the proportion $P_{ij}$ of the score assigned by the ith expert under the jth risk parameter is needed: $P_{ij} = r_{ij} / \sum_{i=1}^{m} r_{ij}$.

(iv) Calculation of information entropy. The entropy value of the jth indicator is $E_j = -\frac{1}{\ln m} \sum_{i=1}^{m} P_{ij} \ln P_{ij}$, and the information utility value $d_j = 1 - E_j$ follows from the entropy value.
(v) Calculation of the entropy weights of the risk parameters. The larger the information utility value, the more important the parameter and the greater its weight. The entropy weight of the jth risk parameter is $w_j = d_j / \sum_{j=1}^{n} d_j$.

(3) Determine the overall weight

The risk parameter weights obtained by the AHP technique show the importance that decision-makers place on different parameters, while the entropy measure more objectively reflects the information contained in the risk parameter itself. Combining the two methods therefore yields a more scientific and reasonable parameter weight vector. The comprehensive weight of the jth parameter under the AHP-entropy weight method is [32]

$\lambda_j = \alpha v_j + (1 - \alpha) w_j$,

where $v_j$ and $w_j$ respectively denote the weights obtained by AHP and by the entropy weight method, and $\alpha$ ($0 \le \alpha \le 1$) is the preference coefficient. Considering that the entropy measure can reduce the deviation caused by human factors to a certain extent, and drawing on the opinions of the expert group, this paper sets the preference coefficient to 0.6, from which the comprehensive weight vector is obtained.

By issuing questionnaires to domain experts (see Table 1 for expert details) and excluding invalid samples, a total of four valid questionnaires were received. A judgment matrix was obtained from the survey results, and a consistency check on the judgment matrix showed low inconsistency ratios (<0.1) for all pairwise comparisons, verifying the rationality and consistency of the calculated weights. Afterwards, the entropy weight method and AHP were combined to obtain the weight of each risk characterization parameter (as shown in Table 2).

When using the established risk characterization parameter system to assess the hazard of potential risk events in maritime logistics, failures are usually evaluated with predetermined scores because of the lack of historical data in the engineering field and the particularities of the risk parameters themselves; a judgment is thus made on the severity of an event in a certain respect. The grading of the characteristic parameters of dangerous events is often a vague concept, usually relying on expert experience to give fuzzy linguistic judgments such as low, relatively low, relatively high, or high, with the corresponding evaluation level determined from this fuzzy language. Therefore, fuzzy set theory is introduced into the determination of parameter levels that rely on expert ratings, to ensure that the quantitative results for the risk parameters are more accurate and in line with the actual situation. Experts assign an evaluation level to each parameter according to the pre-defined fuzzy rating set, and the fuzzy number is then constructed according to the membership function. Common membership functions include triangular and trapezoidal ones.
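To make the weighting steps concrete, here is a minimal sketch, assuming illustrative expert scores and a placeholder AHP weight vector; it implements the entropy weights from steps (i)-(v) and the linear AHP-entropy combination with α = 0.6 described above. The column-maximum normalization in step (ii) is one common choice and is an assumption here.

```python
import numpy as np

# Illustrative scores: 4 experts (rows) rating 5 risk parameters (columns)
# on a 1-10 scale; the numbers are made up for this sketch.
R = np.array([
    [8, 6, 5, 7, 4],
    [7, 6, 6, 8, 5],
    [9, 5, 5, 7, 4],
    [8, 7, 6, 6, 5],
], dtype=float)
m, n = R.shape

Rn = R / R.max(axis=0)                       # step (ii): column-wise normalization (assumed form)
P = Rn / Rn.sum(axis=0)                      # step (iii): score proportions P_ij
E = -(P * np.log(P)).sum(axis=0) / np.log(m) # step (iv): information entropy E_j
d = 1.0 - E                                  # information utility d_j
w_entropy = d / d.sum()                      # step (v): entropy weights w_j

# AHP weights v_j would come from the pairwise judgment matrices; a
# placeholder vector is used here.
v_ahp = np.array([0.30, 0.15, 0.20, 0.20, 0.15])

alpha = 0.6                                  # preference coefficient from the paper
w_combined = alpha * v_ahp + (1 - alpha) * w_entropy
print(w_combined)
```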
According to the relevant literature, the shape of the membership function has a significant impact on the outcomes of fuzzy operations, and triangular and trapezoidal fuzzy numbers are thought to be more efficient [34]. Among them, trapezoidal fuzzy numbers can be turned into crisp values, interval numbers, and triangular fuzzy numbers by adjusting their parameters, which makes fuzzy variables easy to interpret [35]. Hence, the trapezoidal membership function is more scalable and more widely applicable in decision-making on complex problems [36]. Since the trapezoidal membership function better matches the objective evaluation situation, trapezoidal fuzzy numbers are used to handle the linguistic variables of the expert evaluation.

Assuming that the real numbers a, b, c, and d ($a \le b \le c \le d$) form the four endpoints of the trapezoidal fuzzy number, its membership function µ(x) can be defined as

$\mu(x) = \begin{cases} 0, & x < a \\ (x-a)/(b-a), & a \le x < b \\ 1, & b \le x \le c \\ (d-x)/(d-c), & c < x \le d \\ 0, & x > d. \end{cases}$

Before the expert rating, it is necessary to define the fuzzy rating set with the attribute levels and the corresponding membership functions. This paper divides the constructed risk characterization parameter variables into five levels. The level attributes of the corresponding parameters, the linguistic variables, and the corresponding fuzzy numbers are shown in Tables 3-7. The rating variables of the five parameters are represented by trapezoidal fuzzy numbers, and the rating standards for the probability of occurrence of failure, the degree of detection, and the consequence variables of time delay, additional cost, and safety and security loss share the membership function shown in Figure 3. As an example, the safety and security loss levels (Table 7) are defined as follows:

SF1 (Light): the goods, equipment, or system are slightly damaged, but the functions are complete, and maintenance is convenient and fast; the number of minor injuries does not exceed 2; fuzzy number (0, 0, 1, 2).
SF2 (Relatively light): the equipment or system is slightly damaged, and maintenance is fairly convenient; the damage rate of the goods is 1-5%; three or more people have been slightly injured; fuzzy number (0.5, 2, 3, 4.5).
SF4 (Relatively serious): the equipment or system is seriously damaged and inconvenient to maintain; the damage rate of goods reaches 10-20%; 1-2 people are seriously injured; fuzzy number (5.5, 7, 8, 9.5).
SF5 (Severe): the equipment or system is seriously damaged and transportation cannot continue; the proportion of damaged goods exceeds 20%; fatalities occur; fuzzy number (8, 9, 10, 10).
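The piecewise definition above translates directly into code. A minimal sketch follows, evaluated on the "Severe" level (8, 9, 10, 10) from Table 7; the sample point is illustrative.

```python
def trapezoid(x: float, a: float, b: float, c: float, d: float) -> float:
    """Trapezoidal membership with endpoints a <= b <= c <= d."""
    if x < a or x > d:
        return 0.0
    if a <= x < b:
        return (x - a) / (b - a)     # rising edge
    if b <= x <= c:
        return 1.0                   # plateau
    return (d - x) / (d - c)         # falling edge, c < x <= d

# Example: the "Severe" safety/security level (8, 9, 10, 10) from Table 7.
print(trapezoid(8.5, 8, 9, 10, 10))  # 0.5
```

Note that the degenerate edges (a = b or c = d) never trigger a division by zero, because those branches cover empty intervals; this is why levels such as (0, 0, 1, 2) and (8, 9, 10, 10) evaluate safely.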
Fuzzy Rating Result Calculation

After obtaining the evaluation values given by the experts in the form of trapezoidal fuzzy numbers (see Table 1 for expert details), the evaluation information from the different experts is integrated with the help of the uncertain ordered weighted averaging (UOWA) operator. This method determines the weight of each expert by comparing the degree of difference between the fuzzy number of that expert's evaluation and the average fuzzy number obtained by combining the opinions of all experts. The specific calculation is as follows.

Suppose an expert group composed of n experts evaluates a certain failure mode, and the evaluation level of the Kth expert on the ith parameter variable is converted into the trapezoidal fuzzy number $R^K = (a^K, b^K, c^K, d^K)$, $K = 1, 2, \ldots, n$. The UOWA operator is then used to synthesize the fuzzy numbers of the experts' evaluation opinions:

(1) Calculate the arithmetic mean of the trapezoidal fuzzy numbers, $\tilde{R}_m = \frac{1}{n} \sum_{K=1}^{n} R^K$.

(2) Calculate the distance between each $R^K$ and $\tilde{R}_m$.

(3) Calculate the similarity between $R^K$ and $\tilde{R}_m$; for trapezoidal fuzzy numbers, the similarity increases as the distance decreases.

(4) Assemble the fuzzy numbers. The aggregation method of the UOWA operator synthesizes the fuzzy numbers of the different experts' evaluations, with the normalized similarities defined as the weight coefficients, to obtain the final result.

After using the UOWA operator to aggregate the expert evaluation opinions, the result obtained is still a trapezoidal fuzzy number. By combining it with the membership functions, the fuzzy evaluation results of the different parameters under a specific failure mode can be obtained from the aggregated expert opinions. The specific conversion process is shown in Figure 4: combining the aggregated fuzzy number with the membership function of each attribute level, the membership degree corresponding to the highest intersection point µ(x) is obtained, which finally yields the fuzzy parameter variable set of the corresponding attribute evaluation. In Figure 4, the dotted line represents the trapezoidal fuzzy number obtained after gathering the expert opinions; it intersects the curves $O_1, O_2, O_3, O_4, O_5$, and the ordinates of the highest intersection points are L, M, 1, N, and K, respectively, with L, N, M, K ∈ [0, 1]. The fuzzy set for the occurrence probability level O formed in this way can be expressed as O = {(O1, L), (O2, M), (O3, 1), (O4, N), (O5, K)}. By normalizing this set of fuzzy membership degrees, the prior probability evaluation value of the risk factor under the specific parameter can be obtained. The fuzzy set of each risk attribute parameter can be obtained in the same way.
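A minimal sketch of this aggregation follows. The Manhattan-style distance and the 1/(1 + d) similarity are common choices and are assumptions here, since the paper's exact operators are not reproduced in the extracted text; the expert trapezoids are placeholders.

```python
import numpy as np

experts = np.array([                 # one trapezoid (a, b, c, d) per expert
    [5.5, 7.0, 8.0, 9.5],
    [3.0, 4.5, 5.5, 7.0],
    [5.5, 7.0, 8.0, 9.5],
    [8.0, 9.0, 10.0, 10.0],
])

mean_tfn = experts.mean(axis=0)                 # step (1): arithmetic-mean trapezoid
dist = np.abs(experts - mean_tfn).mean(axis=1)  # step (2): distance to the mean (assumed form)
sim = 1.0 / (1.0 + dist)                        # step (3): similarity measure (assumed form)
weights = sim / sim.sum()                       # normalized expert weight coefficients
aggregated = weights @ experts                  # step (4): group trapezoidal fuzzy number
print(aggregated)
```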
Identification of Failure Modes of Bauxite Maritime Logistics

Bauxite maritime logistics refer to the entire logistics process of goods transported by sea from the port of departure to the port of destination. From the perspective of transportation elements, it can be divided into transportation nodes and transportation routes. Transportation nodes include bauxite export ports, hub trans-shipment ports, and destination ports; the risk to ships at port nodes is mainly due to subjective human factors. Transportation routes refer to the specific transportation processes other than the transport nodes; during the voyage, ships face risks from interference by the external environment, the cargo status, the ship's operating condition, and the crew. To ensure that the selected risk factors are scientific, objective, and reasonable, this paper consulted experienced experts in the industry and the relevant literature to obtain the main potential risk factors of China's imported bauxite maritime logistics (see Table 8): FM1 workers' riots or strikes; FM2 port congestion; FM3 improper operation by the crew; FM4 piracy or terrorist attack; FM5 terrible sea conditions; FM6 bauxite free surface effect; FM7 ship facilities and equipment failure.

Among these failure modes, the first is related to bauxite transit reliability. Guinea has been China's largest source of imported bauxite in recent years. However, the domestic political situation in Guinea is unstable, and worker riots or port strikes have occurred from time to time. For example, in September 2017, riots broke out in the Boke bauxite area of Guinea; protesters exchanged fire with the police, and bauxite production activities were severely hindered. This kind of failure mode can cause injuries or damage to cargo and equipment, delay ships, and cause severe production standstills and supply disruptions.

The second failure mode is related to the port's loading and unloading capacity and the arrival of ships. Once the port is congested, the transportation of bauxite is inevitably delayed, which increases transportation costs. The third failure mode is related to the transport personnel, i.e., the crew themselves. In general, operational errors caused by a lack of safety awareness or emergency response skills, and the crew's mental health problems caused by long sea voyages, all have a certain impact on maritime transportation.

The fourth failure mode occurs during the transportation of the goods. On China's bauxite import route from Guinea in West Africa to Yantai in Shandong, the Gulf of Guinea and the Strait of Malacca are almost unavoidable, yet pirate attacks occur frequently in these areas. Such incidents are bound to bring huge property losses and casualties. The fifth failure mode is related to the natural environment. Bad sea conditions (such as heavy rain, typhoons, or tsunamis) have a huge impact on the normal navigation of the ship; in severe cases the ship may capsize, threatening the safety of the crew.
The sixth failure mode is related to the cargo itself. Since bauxite contains a certain amount of water during actual transportation, it fluidizes easily during bumpy sea transport and forms a free surface effect, thereby reducing the stability of the ship and causing loss of cargo or even capsizing. The seventh failure mode is caused by the transport ship: performance degradation due to the aging of the ship's operating facilities or equipment, or hidden dangers in the hull's structure and integrity, can easily cause failures during cargo transportation and capsizing.

Constructing a Fuzzy Rule Base System Based on a Confidence Structure

In the traditional process of establishing a fuzzy rule base, IF-THEN rules are used to express the association between the antecedent attributes and the conclusion attribute, and a basic fuzzy rule base consists of a series of simple IF-THEN rules. Although such a rule base can express fuzzy situations, it is difficult for it to reflect slight changes of the antecedent attributes in the conclusion part. Moreover, due to insufficient expert experience and evidence, it is difficult to maintain a completely deterministic relationship between the antecedent attributes and the conclusion. Therefore, the fuzzy rule base takes a form based on the confidence structure, in which a confidence degree expresses the degree of trust in each conclusion under the given antecedents. The Kth rule can be written as

$R_K$: IF $A_1^K$ and $A_2^K$ and $\ldots$ and $A_M^K$, THEN $\{(D_1, \beta_{1K}), (D_2, \beta_{2K}), \ldots, (D_N, \beta_{NK})\}$,

where $A_j^K$ denotes the level taken by the jth antecedent attribute in the Kth fuzzy rule, M is the total number of antecedent attributes, L is the total number of rules in the fuzzy rule base, $D_i$ ($i \in \{1, 2, \ldots, N\}$) denotes the ith conclusion grade, and $\beta_{iK}$ is its corresponding confidence. Given the antecedent attributes $A_j^K$, if $\sum_{i=1}^{N} \beta_{iK} = 1$, the Kth rule is complete; if $\sum_{i=1}^{N} \beta_{iK} = 0$, the input condition attributes cannot be judged.

According to the characteristics of the constructed FBN, fuzzy rule bases based on the confidence structure need to be constructed for the risk parameters C and S, respectively. This paper takes the rule base for the C parameter as an example to introduce the creation process. The parameters O, S, and D serve as the antecedent attributes, and the criticality of the failure C serves as the conclusion attribute:

$R_l$: IF $O_i$ and $S_j$ and $D_k$, THEN $\{(C_1, \beta_{1l}), \ldots, (C_5, \beta_{5l})\}$,

where the subscripts $i, j, k, m \in \{1, \ldots, 5\}$ index the grades used to estimate the criticality of failure modes, expressed as "low, relatively low, medium, relatively high, and high".
The confidence levels in the rule base can be determined from accumulated knowledge of past events or from the subjective experience of domain experts. The former often requires a large amount of objective data for support, while using domain expert knowledge to reasonably determine all the rules in the rule base is subjective and difficult, especially for a large rule base. Given this situation, a proportional method has been proposed to rationalize the confidence distribution in the rule base [7]. However, the main drawback of that method is that it does not consider the impact of parameter weights. When building the confidence values of the rule base, this paper therefore uses the proportional method with the confidence distribution determined by the importance of each risk parameter. In creating this rule base, all the attribute parameters in the IF part and the THEN part are described by five grade variables, so the confidence of a given grade variable in the conclusion attribute is determined by summing the normalized weights of the antecedent risk parameters whose variables belong to that grade. Since the antecedent contains three five-valued risk parameters, the constructed fuzzy rule base based on the confidence structure has a total of 125 rules. Due to space limitations, only some of the rules are listed in Table 9.
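The proportional assignment can be reproduced in a few lines. In the sketch below, the normalized antecedent weights are placeholders chosen so that the rule for (O1, S1, D2) reproduces the 75%/25% split of rule R2 quoted in the next section; the real weights come from Table 2.

```python
from itertools import product

# Assumed normalized antecedent weights; chosen so w_O + w_S = 0.75, w_D = 0.25.
w = {"O": 0.35, "S": 0.40, "D": 0.25}
grades = range(1, 6)                  # 1 = low ... 5 = high

rules = {}
for i, j, k in product(grades, repeat=3):
    belief = [0.0] * 5
    belief[i - 1] += w["O"]           # O at grade i contributes its weight to C_i
    belief[j - 1] += w["S"]
    belief[k - 1] += w["D"]
    rules[(i, j, k)] = belief         # conditional distribution P(C | O_i, S_j, D_k)

print(len(rules))                     # 125 rules, as stated above
print(rules[(1, 1, 2)])               # [0.75, 0.25, 0, 0, 0], matching rule R2
```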
Bayesian Network Construction

Because the Bayesian network describes uncertain non-linear relationships between events well, it can handle the fuzzy rule base system based on the confidence structure, and it has efficient reasoning ability. Therefore, this paper uses Bayesian network inference technology to describe and implement the fuzzy rules based on the confidence structure.

Based on the relationships between the characteristic attributes of the failure mode, combined with the constructed risk parameter characterization system, the topology model shown in Figure 5 takes the failure occurrence degree O, the detection degree D, and the consequence parameters time delay ST, additional cost SC, and safety and security loss SF as root nodes, the consequence severity S as the intermediate node, and the failure mode criticality C as the leaf node.

To make better use of the BN for reasoning, the fuzzy rule base based on the confidence structure needs to be transformed into the form of a conditional probability table. Taking rule 2 in Table 10 as an example, the transformation is as follows:

$R_2$: IF $O_1$, $S_1$, $D_2$, THEN {(75%, low), (25%, relatively low), (0%, medium), (0%, relatively high), (0%, high)}.

That is, given $O_1$, $S_1$, and $D_2$, the probability of the child node $C_m$ ($m = 1, 2, \ldots, 5$) is (0.75, 0.25, 0, 0, 0), which can be expressed as $P(C_m \mid O_1, S_1, D_2) = (0.75, 0.25, 0, 0, 0)$. The fuzzy rule base of the confidence structure can therefore be transformed into the form of a conditional probability distribution.

After the rule base is transformed into the conditional probability distributions that describe the degree of correlation between the nodes of the created BN, the analysis of the criticality of a failure mode becomes the calculation of the marginal probability of the child node C. Following the method in Section 2, the expert evaluation opinions are represented as trapezoidal fuzzy numbers, processed, and transformed; the parameter variables can be assigned to the different linguistic variables in the form of membership degrees, finally giving discrete fuzzy subsets for O, D, ST, SC, and SF. In the BN reasoning process, the probabilities of the different states of every node must sum to 1; therefore, the membership degrees of the different states of each root node need to be normalized:

$P_i = P_i^* / \sum_{j=1}^{n} P_j^*$.
Here, $P_i^*$ is the confidence value of the ith node-parameter state before normalization, and n is the total number of node-parameter states. After normalization, the prior probability values of the different state variables of the root-node parameters are obtained, from which the probability distribution of the intermediate node can be calculated as

$P(S_j) = \sum_{t,c,f} P(S_j \mid ST_t, SC_c, SF_f)\, P(ST_t)\, P(SC_c)\, P(SF_f)$,

and the marginal probability of the leaf node C can then be obtained as

$P(C_m) = \sum_{i,j,k} P(C_m \mid O_i, S_j, D_k)\, P(O_i)\, P(S_j)\, P(D_k)$.

Use the Utility Function to Sort the Criticality

To determine the criticality level of each failure mode, the criticalities of the failure modes must be ranked and the differences between them clarified, helping managers quickly and accurately formulate differentiated management strategies. The criticality of each failure mode obtained by FBN reasoning is given in the form of a fuzzy subset, so the multi-index risk status values of each risk factor must be converted into crisp values for ranking. An appropriate utility function vector U is used to quantify the degree of difference between the risk states and thereby judge the overall criticality level of a failure mode:

$CI = \sum_{m=1}^{5} u_m\, P(C_m)$, with the utility vector $U = (1, 25, 50, 75, 100)$ for the five grades, as used in the calculations below.

Maritime Logistics System Risk Assessment

FBN alone can only evaluate the criticality of each failure mode by itself; it cannot effectively evaluate the system-level risk. Improved ER theory is therefore used to fuse the hazard-degree evaluation results of the individual failure modes: the linguistic variables represented by the confidence structure are used as ER inputs to calculate the maritime logistics system risk, achieving an evaluation of the system risk from an overall perspective.

ER is an important uncertainty reasoning method, widely used in information fusion, expert systems, fault diagnosis, and the military. It represents uncertain information and synthesizes expert opinions well, but traditional ER theory suffers from the problems of "conflict of evidence" and "robustness". Therefore, we use a DS synthesis algorithm based on weight assignment and matrix analysis to overcome the limitations of traditional ER when merging the FBN-inferred results of the failure modes [37]. Suppose the evaluation results of three risk factors are to be integrated, the evaluation uses the five levels above, and the identification framework is θ = {L, RL, M, RH, H}; the evaluation results of the three risk factors are shown in Table 11.

The steps of the DS evidence fusion method based on weight distribution and matrix analysis are as follows:

(1) Let the basic probability assignments of the three risk factors be the row vectors $A = (a_1, \ldots, a_5)$, $B = (b_1, \ldots, b_5)$, and $C = (c_1, \ldots, c_5)$. Multiply the transpose of A by B to obtain the matrix $M_1 = A^{\mathrm{T}} B$.

(2) In the matrix $M_1$, the sum of the off-diagonal elements is the degree of conflict between risk factors A and B. The column vector $M_1'$ formed by the main-diagonal elements of $M_1$ is then multiplied with C to obtain the matrix $M_2 = M_1' C$. The conflict degree K of the three risk factors is the sum of all the off-diagonal elements of the matrices $M_1$ and $M_2$. Following the same steps, any finite number of remaining risk factors can be merged and the conflict degree K of all risk factors obtained.

(3) Use the improved weight-distribution synthesis formula to calculate the fused result,
where $q(A) = \frac{1}{n} \sum_{i=1}^{n} m_i(A)$ represents the average degree of support for proposition A over all risk factors; the conflicting mass K is allocated to A in proportion to q(A).

Sensitivity Analysis

When a new model is proposed and constructed, rigorous testing is needed to verify its reliability and rationality, especially when a subjective belief structure is involved. A sensitivity analysis studies the sensitivity of the output to the input variables, which can be either parameters or variables. In this research, the confidence parameters corresponding to the root-node variables of the FBN are used as the input, and the focus is on the degree to which changes in these confidences affect the confidence level of the failure mode. If the constructed FBN is reasonable, the sensitivity analysis should at least satisfy the following two axioms [7].

Axiom 1. A slight increase/decrease in the prior subjective probability of each input node should result in a relative increase/decrease in the posterior probability value of the output node.

Axiom 2. The total magnitude of the influence on the risk priority value of a combined probability change from x attributes (evidence) should always be greater than the influence from a set of x − y (y ∈ x) attributes (sub-evidence).

Failure Mode Criticality Assessment

To obtain the prior probabilities of risk events more accurately, questionnaires were issued to four business managers and researchers who have long been engaged in bauxite transportation. In the questionnaire, the experts separately evaluated the identified failure modes; the assessment covers the five risk parameters and their fuzzy linguistic rating levels. Taking the risk factor "worker riot or strike" as an example, Table 12 shows the four experts' evaluations of this failure mode under the five risk parameters. Using the UOWA operator to combine the expert group's evaluations gives the assembled trapezoidal fuzzy number; from the membership function graph of each risk parameter level, the ordinate of the highest point where the trapezoid intersects each parameter level is obtained. After normalization, the prior probability distribution of the failure mode under the different risk parameters is obtained, and likewise the prior probability distributions of all failure modes with respect to each parameter.

After obtaining the prior probabilities of the root nodes of the FBN topology under the different failure modes, combined with the established fuzzy rule base system, the criticality assessment result of each failure mode can be calculated using Equation (13); the calculation can be performed in the software Netica. Figure 6 shows the risk-reasoning result for the risk factor "worker riot or strike": its risk status is P(C) = (17.2%, 19.4%, 14.5%, 27.4%, 21.4%), i.e., the confidence of the "low" risk status is 17.2%, "relatively low" 19.4%, "medium" 14.5%, "relatively high" 27.4%, and "high" 21.4%. In the software, any modification of a risk input related to the five risk parameters triggers a change of the node states, which supports automatic, real-time risk assessment of any target risk factor in the bauxite maritime logistics chain.
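A minimal sketch of the leaf-node marginalization and the utility-based criticality index follows, reusing the illustrative rule base from the rule-base sketch above; the final line reproduces the CI value for "worker riot or strike" from the posterior quoted in the text.

```python
import numpy as np
from itertools import product

# Rebuild the illustrative rule base (weights are assumptions, as before).
w_O, w_S, w_D = 0.35, 0.40, 0.25
rules = {}
for i, j, k in product(range(1, 6), repeat=3):
    b = np.zeros(5)
    b[i - 1] += w_O; b[j - 1] += w_S; b[k - 1] += w_D
    rules[(i, j, k)] = b

def marginal_C(p_O, p_S, p_D):
    """Leaf-node marginal: P(C) = sum_{i,j,k} P(C|O_i,S_j,D_k) P(O_i) P(S_j) P(D_k)."""
    p_C = np.zeros(5)
    for (i, j, k), b in rules.items():
        p_C += b * p_O[i - 1] * p_S[j - 1] * p_D[k - 1]
    return p_C

u = np.array([1, 25, 50, 75, 100])   # utility vector implied by the CI calculation
p_C = np.array([0.172, 0.194, 0.145, 0.274, 0.214])  # FM1 posterior from the text
print(float(u @ p_C))                # 54.222, the CI value quoted below for FM1
```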
The risk statuses of the risk factors expressed in linguistic variables need to be further clarified by the utility function to prioritize the risks. Using the utility function vector defined above, the risk assessment value of the risk factor "worker riots" is CI(FM1) = 17.2% × 1 + 19.4% × 25 + 14.5% × 50 + 27.4% × 75 + 21.4% × 100 = 54.222. Similarly, the CI values of the other risk factors can be obtained; the results of the constructed risk assessment model are shown in Table 13. In the bauxite maritime supply chain, the hazard levels of the risk factors are ranked in the order FM4 > FM1 > FM7 > FM2 > FM5 > FM6 > FM3. The CI values show that "pirate or terrorist attacks" and "worker riots" are the most harmful, and they are the key risk factors affecting the reliability of the bauxite maritime logistics chain.

Risk Assessment of Bauxite Maritime Logistics System

After obtaining the hazard level of each failure mode of the maritime logistics, the improved synthesis algorithm based on weight distribution and matrix analysis is used to fuse the assessment results of the individual risk factors and obtain the risk status of the bauxite shipping supply chain system. The fusion results are shown in Figure 7. The risk of China's imported bauxite shipping supply chain system is described as 18.32% low, 30.18% relatively low, 26.93% medium, 17.71% relatively high, and 6.94% high. The system risk value calculated with the utility function vector is 41.419. The confidence that the maritime logistics system is at a medium- to high-risk level reaches 51.58%, indicating that the overall system risk is relatively high.
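The fusion step can be sketched as follows for two bodies of evidence, assuming that the off-diagonal mass of the pairwise product matrix measures the conflict K and that K is redistributed in proportion to the average support q(A). This follows the general averaging-redistribution idea described above; the paper's exact matrix formulation may differ in detail, and both input BPAs here are placeholders.

```python
import numpy as np

m1 = np.array([0.10, 0.20, 0.30, 0.25, 0.15])  # illustrative BPAs over {L, RL, M, RH, H}
m2 = np.array([0.20, 0.25, 0.30, 0.15, 0.10])

M = np.outer(m1, m2)                 # pairwise product matrix M1 = m1^T m2
K = M.sum() - np.trace(M)            # conflict = sum of off-diagonal elements
q = (m1 + m2) / 2                    # average degree of support q(A)
fused = np.diag(M) + K * q           # agreement mass plus redistributed conflict
fused /= fused.sum()                 # renormalize (already sums to 1 up to rounding)
print(K, fused)
```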
Sensitivity Analysis Results

According to the axioms introduced in Section 3.8, a sensitivity analysis is carried out to test the validity and reliability of the Bayesian network based on the fuzzy rule base system. The linguistic variables of all risk parameters should be positively correlated with the CI value; that is, when the linguistic variable of a risk parameter slightly increases or decreases, the CI value of the risk assessment result should become correspondingly higher or lower. A subjective probability of 10% is therefore re-assigned to the different linguistic variables of each parameter so as to change the CI value in an incremental direction; if the constructed model is reasonable, the CI value should increase accordingly. Taking "worker riots" as an example, for a single risk parameter and for various combinations of risk parameters, we raise the prior probability of the linguistic variable belonging to "H" by 10% and reduce the prior probability of the currently lowest linguistic variable of the modified risk parameter by 10%, keeping the total confidence constant; axiom 1 and axiom 2 are then tested. The results are shown in Table 14. When the prior probability of the "H" linguistic variable of a single risk parameter increases by 10%, the CI value of the final risk assessment result also increases to varying degrees. For example, when the confidence that the risk parameter "O" is in the "H" state increases by 10%, the CI value increases by 2.97; when the confidence that the risk parameter "D" is in the "H" state increases by 10%, the CI value increases by 1.875. Axiom 1 is thus verified. In addition, as the number of combined risk parameters increases, the risk assessment CI value keeps changing in ascending order. For example, for the combinations "O", "O D", "O D ST", "O D ST SC", and "O D ST SC SF", the risk assessment values change by 2.97, 4.845, 5.47, 5.72, and 6.22, respectively. Axiom 2 is therefore verified in this model.
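The axiom-1 check can be automated along the following lines, reusing marginal_C and the rule base from the inference sketch above; the priors and the 10% perturbation are illustrative.

```python
import numpy as np

def bump(p, frac=0.10):
    """Raise the 'H' grade by frac and compensate at the current minimum grade."""
    p = p.copy()
    p[-1] += frac
    p[np.argmin(p[:-1])] -= frac     # keep the total confidence equal to 1
    return p

p_O = np.array([0.2, 0.2, 0.2, 0.2, 0.2])      # placeholder priors
p_S = np.array([0.1, 0.2, 0.3, 0.25, 0.15])
p_D = np.array([0.3, 0.3, 0.2, 0.1, 0.1])
u = np.array([1, 25, 50, 75, 100])

ci_base = u @ marginal_C(p_O, p_S, p_D)
ci_bump = u @ marginal_C(bump(p_O), p_S, p_D)
assert ci_bump > ci_base             # Axiom 1: the CI rises with the perturbation
print(ci_base, ci_bump)
```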
Conclusions The supply network for bauxite shipping is becoming more intricate in the highly competitive and unstable global bauxite market. China is a large consumer and importer of bauxite. Under the influence of many uncertain risks, it is crucial to ensure the smooth operation of the maritime logistics of imported bauxite. Therefore, it is essential to develop a reliable and adaptable technique to evaluate the risk associated with bauxite maritime logistics. Traditional FMECA methods suffer from problems such as incomplete risk-characterization parameters, failure to reflect differences in parameter importance, and limited discrimination of RPN values. This research proposes a systemic risk assessment model combining the FBN and improved ER theory based on the improved FMECA. To describe the risk factors more thoroughly and accurately, the improved FMECA adds three sub-parameters to the consequence parameter. It also uses the AHP-entropy technique to weight the risk parameters. Additionally, a fuzzy rule base system based on the confidence structure is created by fusing fuzzy set theory and expert knowledge. BN reasoning technology is then used to perform risk inference for complex systems in uncertain environments, and the weighted utility function vector is used to calculate a variety of risk status indicators. The clusters are converted into numerical values, and the risk factors are then ranked. The improved ER theory is used to aggregate individual risk events so as to reach an overall judgment of the bauxite maritime logistics system risk. The results show that, in the bauxite shipping supply chain, "pirate or terrorist attack" is the most important risk factor, followed by "worker riots", "ship facilities and equipment failures", "port congestion", "bad sea conditions", "bauxite free surface effect", and "improper operation by the crew". Therefore, from a controllability point of view, in the process of importing bauxite to China, the protection of ships on the route should be increased, the ships should be regularly maintained and repaired, and ship operation skills and emergency response capabilities should be improved. At the same time, full attention should be paid to the free surface effect of bauxite: the water content of bauxite should be reduced as much as possible, and effective measures should be taken to stabilize the cargo and reduce bumps during transportation. In addition, China's imported bauxite maritime logistics system is at a medium- to high-risk level as a whole. In summary, the main contributions of this paper are as follows: (1) A systematic risk assessment model is proposed, which can carry out a risk assessment of the system from both the local and the overall dimensions and can effectively improve the scientific rigour and accuracy of risk assessment in an uncertain environment. (2) The improved FMECA can effectively overcome the limitations of the traditional FMECA method, making it more suitable for the field of risk analysis and improving the reliability and rationality of the risk assessment.
3.1. Determination of Risk Characterization Parameters. 3.1.1. Construction of Risk Characterization Parameter System.
(1) Calculate the arithmetic mean of the trapezoidal fuzzy numbers.
Figure 3. Membership function of a parametric variable.
Figure 4. Schematic diagram of level evaluation of failure occurrence.
By combining the fuzzy number of the fuzzy rating evaluated by the expert group with the membership function of each attribute rating, the membership degree corresponding to the highest intersection point is obtained and, finally, the fuzzy parameter variable set of the corresponding attribute evaluation is obtained. According to this method, the fuzzy set of each risk attribute parameter can be obtained. As shown in Figure 4, the dotted line represents the trapezoidal fuzzy number obtained after gathering expert opinions. The fuzzy number intersects the membership functions O_1, O_2, O_3, O_4, O_5, and the abscissas of the highest intersection points are L, M, 1, N, and K, respectively. The fuzzy set for the occurrence probability level O formed in this way is therefore given by the membership degrees (L, M, 1, N, K) over the five rating grades.
Figure 6. Results of risk assessment of FM1 using Netica software.
CI(FM1) = 17.2% × 1 + 19.4% × 25 + 14.5% × 50 + 27.4% × 75 + 21.4% × 100 = 54.222. Similarly, the CI values of the other risk factors can be obtained; the results from the constructed risk assessment model are shown in Table 13. In the bauxite maritime supply chain, the hazard level of the risk factors is ranked in the order "pirate or terrorist attack" > "worker riots" > "ship facilities and equipment failures" > "port congestion" > "bad sea conditions" > "bauxite free surface effect" > "improper operation by the crew". It can be seen from the CI values that "pirate or terrorist attacks" and "worker riots" are the most harmful; they are the key risk factors affecting the reliability of bauxite maritime logistics links.
Table 1. Experts' knowledge and experience. Table 3. Probability of risk O fuzzy rating. Table 4. Fuzzy rating of risk detection degree D. Table 5. Time delay ST fuzzy rating. Table 6. Extra cost SC fuzzy rating. Table 7. Safety and security loss SF fuzzy assessment level. Table 8. List of failure modes of bauxite maritime logistics. Table 9. Fuzzy rule base based on confidence structure. Table 13. Failure mode risk assessment result. Table 14. Sensitivity analysis of different combinations of risk parameters.
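The CI computation above is a weighted utility aggregation: each of the five risk grades is assigned a utility value (1, 25, 50, 75, 100, as the worked example implies), and the posterior probabilities from the BN are the weights. A minimal sketch, assuming that utility assignment:

```python
# Crisp risk index (CI) as a weighted utility over the five risk grades.
# The utilities (1, 25, 50, 75, 100) are read off the worked example above.
UTILITIES = (1, 25, 50, 75, 100)

def crisp_index(posteriors):
    """Aggregate a posterior distribution over the five grades into a CI value."""
    assert abs(sum(posteriors) - 1.0) < 1e-2, "posteriors should sum to ~1"
    return sum(p * u for p, u in zip(posteriors, UTILITIES))

# Reproduces CI(FM1) = 54.222 from the posteriors reported for FM1 above.
print(crisp_index((0.172, 0.194, 0.145, 0.274, 0.214)))  # ≈ 54.222
```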
2023-04-02T15:36:59.372Z
2023-03-31T00:00:00.000
{ "year": 2023, "sha1": "494590d15941a2d4967e78317854a7c20d126942", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-1312/11/4/755/pdf?version=1680249639", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "9f2484975740930f2ba510957f7b7df6ba83c737", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [] }
225174130
pes2o/s2orc
v3-fos-license
Further Validation of Slovak Big Five Inventory–2: Six-Months Test-Retest Stability and Predictive Power The current study focuses on exploring the 6-month test-retest stability of the Slovak second version of the Big Five Inventory (BFI-2) and its predictive power for subjective and psychological well-being, value-focused behavior, and everyday behavior. The sample consisted of 414 adult Slovak participants, who reported on their personalities using the BFI-2 on the first occasion, and then again circa 6 months later, along with well-being and behavior self-report measures focused on the past 6 months. The results showed strong test-retest stability of the Slovak BFI-2's domains and facets. The Slovak BFI-2 also showed the expected pattern of well-being predictions, with the Extraversion and Negative Emotionality domains as the strongest predictors. Furthermore, meaningful trait-behavior links of the Slovak BFI-2 were discovered. Overall, our results contribute to the robust international knowledge base regarding the stability, predictive power, and ecological validity of the Big Five personality factors. Introduction Recently, Soto and John (2017) introduced a new personality scale, the Big Five Inventory-2 (BFI-2), which is a revised and extended version of the BFI (John & Srivastava, 1999). Like the BFI, the BFI-2 has three key characteristics: focus, clarity, and brevity. It captures prototypical characteristics of each Big Five domain, uses easy statements, and is short enough to be completed within 10 minutes. It consists of 5 domains and 3 facets per domain, namely Extraversion (Sociability, Assertiveness, Energy Level), Agreeableness (Compassion, Respectfulness, Trust), Conscientiousness (Organization, Productiveness, Responsibility), Negative Emotionality (Anxiety, Depression, Emotional Volatility) and Open-Mindedness (Intellectual Curiosity, Aesthetic Sensitivity, Creative Imagination). It provides breadth, efficiency, and high bandwidth at the domain level, and descriptive and predictive precision of high fidelity at the facet level (Soto & John, 2017). The measure has recently been adapted into German (Danner et al., 2019), Dutch (Denissen, Geenen, Soto, John, & van Aken, 2020), and Russian (Shchebetenko, Kalugin, Mishkevich, Soto, & John, 2020), as well as into Slovak (Halama et al., 2020). The Slovak BFI-2 adaptation shows very good internal consistency at the domain level and more variable, though acceptable, internal consistency at the facet level, given the smaller number of items per facet. The Slovak BFI-2 retains the intended structure at the domain and facet levels. The principal component analysis of facets recovered the intended BFI-2 structure. The hierarchical structure of the Slovak BFI-2 is similar in robustness to the original English version. The Slovak BFI-2 was validated by association with gender and age, where it confirmed similar differences to those obtained in previous Big Five research and in the original English version. It was further validated by association with the NEO-FFI and TIPI, where it showed good convergence. Finally, the validation was done using associations with selected well-being measures, showing a meaningful pattern of associations at both domain and facet levels (Halama et al., 2020). Further examination of the Slovak BFI-2 is needed to ensure its capacity to predict real-life consequences and to serve as a useful tool for the measurement of personality traits.
Soto and John (2017) verified 2-month test-retest stability, where the average 2-month stability was .76 at the domain level and .73 at the facet level, thus showing clear evidence of retest reliability. Test-retest stability has also been examined for other Big Five instruments. For example, a 2-month test-retest validation of the Italian BFI showed good stability, with all correlations being higher than .75 (Fossati, Borroni, Marchione, & Maffei, 2011), and a 6-week test-retest in a German sample of the BFI showed average correlations of .78 (Rammstedt & Danner, 2017). A meta-analysis of the test-retest stability of different Big Five measures estimated the median aggregated coefficient for the Big Five traits at .816; the most dependable scores were found for Extraversion, while the least dependable were identified for Agreeableness (Gnambs, 2014). As the Big Five measures appear to have good test-retest stability, we want to confirm and report the 6-month test-retest stability of the Slovak BFI-2. Big Five domains have been robustly connected to well-being variables, both subjective and psychological ones. The relationship between psychological well-being (PWB) and personality is at a moderate level (Grant, Langan-Fox, & Anglim, 2009). The highest correlations are shown between general PWB and Neuroticism, Conscientiousness, and Extraversion (Hicks & Mehta, 2018). Psychological and subjective well-being correlate positively with Extraversion and Conscientiousness and negatively with Neuroticism (Reyes, Shmotkin, & Ryff, 2002). These results were confirmed by Kokko, Tolvanen, and Pulkkinen (2013), who reported a high negative relationship with Neuroticism and a positive one with Extraversion. They also identified relationships between PWB and Conscientiousness, Openness to experience, and Agreeableness. All the Big Five personality factors together explain 68-74% of the variance in Environmental mastery; 63-66% in Self-acceptance; 55-62% in Purpose in life; 50-73% in Positive relations; 51-56% in Personal growth; and 36-51% in Autonomy (Anglim & Grant, 2016; Sun, Kaufman, & Smillie, 2018). Specifically, positive relations were predicted by Extraversion and Agreeableness in a positive way and by Neuroticism in a negative way. Autonomy was negatively predicted by Neuroticism and Agreeableness and positively by Openness to experience. Environmental mastery was predicted negatively by Neuroticism and positively by Conscientiousness and Extraversion. Personal growth was positively predicted by Openness to experience, Conscientiousness, and Extraversion. Purpose in life was mainly predicted by Conscientiousness and Extraversion and, to a lower extent, positively by Openness to experience and negatively by Neuroticism. Self-acceptance was predicted mainly by Neuroticism in a negative way and by Extraversion and Conscientiousness in a positive way (Anglim & Grant, 2016). The cognitive factor of subjective well-being, satisfaction with life, shows the strongest associations with Neuroticism in a negative way and with Extraversion in a positive one. Smaller correlations were also found with Agreeableness and Conscientiousness (Stolarski & Matthews, 2016; Balgiu, 2019). Extraversion and Neuroticism were found to be strong predictors of life satisfaction, with the Depression facet being sufficient among the Neuroticism facets to carry this prediction (Schimmack et al., 2004). This large body of confirmed associations between the Big Five and well-being domains allows us to use these constructs for convergent validation of the BFI-2.
In our validation process, we were inspired by Soto and John (2017) in their focus on behavioral measures based on Schwartz's value theory. They found that each behavioral criterion was significantly predicted by at least one domain and facet. Furthermore, each criterion was associated with a distinctive and conceptually meaningful set of predictors. For example, benevolent behavior was predicted most strongly by Agreeableness, namely by Compassion and Trust. Hedonistic behavior was predicted most strongly by a low level of Conscientiousness, specifically low Productiveness and Responsibility. Self-directed behavior was mainly predicted by Open-Mindedness and, at the facet level, by Intellectual Curiosity and Creative Imagination. In order to expand the knowledge of the Big Five personality traits in Slovakia and to inform about the ecological validity of the BFI-2, this study also investigates how the Big Five traits predict common or "daily" behavioral acts. The item selection was inspired by Chapman and Goldberg (2017). They found associations of Extraversion with drinking in a bar, going running, or planning a party. For Agreeableness, they found correlations with behavior that benefits others, like ironing, washing dishes, and so on. For Emotional stability, low levels were associated with taking tranquilizing pills, drinking alcohol, or having more nightmares. Open-Mindedness correlated, for example, with daydreaming, meditating, attending art exhibitions, or trying something completely new. To sum up, we focused on 3 areas that have not yet been investigated for the Slovak BFI-2 specifically, or more generally within the Big Five approach to personality in Slovakia. The first goal addressed the question of how stable the Big Five factors are in Slovakia. To answer this, we investigated the test-retest stability of BFI-2 domains and facets over a 6-month period. The second goal focused on the predictive power of the Slovak BFI-2 for subjective and psychological well-being and behavioral measures of values over a 6-month period. This allowed us to compare our results with other studies focused on this area and to further validate the Slovak BFI-2. The third goal aimed at expanding the Slovak knowledge of the Big Five factors and providing a perspective on the ecological validity of the Slovak BFI-2. In this regard, we inspected whether the BFI-2 domains could predict some of the mundane or "everyday" behavioral acts connected to the Big Five factors over a 6-month period. Sample The sample for this study was collected on two separate occasions. The first data collection was in late 2018 and the second was circa 6 months later. The participants were recruited through the online panel of a research agency. Every participant provided informed consent. Only participants who passed multiple attention check questions remained in the final sample. The first sample was collected for the purpose of validating the Slovak short and extra-short BFI-2 versions (Kohút, Halama, Soto, & John, submitted). This sample consisted of 801 participants who successfully completed the survey. The research agency then randomly invited circa half of the sample for the second occasion. The final sample used in this study consists of 414 adults, aged between 18 and 75 years (M = 46.23, SD = 14.36). 239 (57.7%) were male with a mean age of 45.03 (SD = 13.87) and 175 (42.3%) were female with a mean age of 47.86 (SD = 14.89).
Participants were compensated for their participation with small credits that could be exchanged for different products. During the second occasion, we assessed participants' subjective and psychological well-being, value-oriented behavior, and some of their everyday behavior in the last 6 months. Subjective well-being was assessed using the Satisfaction with Life Scale (SWLS; Diener, Emmons, Larsen, & Griffin, 1985), which is a 5-item scale focused on measuring global satisfaction with life using a 7-point scale. Items were translated into Slovak from the original version by two of the authors. Cronbach's alpha for this scale was .90. Ryff's 42-item Psychological Wellbeing Scale (Ryff et al., 2010; Ryff, 1989) was used to assess the participants' psychological well-being. This scale focuses on six areas of psychological well-being, using seven 7-point items for each area. Items were translated into Slovak from the original version by two of the authors. Internal consistency was adequate: .71 for the Autonomy subscale, .62 for Environmental mastery, .69 for Personal growth, .77 for Positive relations with others, .70 for Purpose in life, and .82 for Self-acceptance. The use of behavioral self-reports was inspired by the original BFI-2 adaptation and validation study by Soto and John (2017). We used 40 items selected from Bardi and Schwartz's (2003) 80-item scale to assess behavior oriented toward conformity (e.g., "Avoid arguments so that others won't be angry with me"), tradition (e.g., "Show modesty with regard to my achievements and talents"), benevolence (e.g., "Do my friends and family favours without being asked"), power (e.g., "Manipulate others to get what I want"), universalism (e.g., "Show my objections to prejudice - e.g., against racial groups, the homeless"), hedonism (e.g., "Indulge myself by buying things that I don't really need"), security (e.g., "Avoid dangerous places and neighbourhoods"), stimulation (e.g., "Look for new people to meet"), achievement (e.g., "Take on many commitments"), and self-direction (e.g., "Make my own decisions"). These items were selected and translated into Slovak by two of the authors. We asked participants to judge how frequently they behaved in the way described in these items in the last 6 months, using a 5-point scale ranging from I have never behaved this way to I behaved this way every time I had a chance. Internal consistency of these scales ranged from low to adequate (M = .51), bearing in mind the low number of items (4 per scale) and the width of the measured constructs, varying between .19 (Conformity) and .76 (Power). To assess the frequency of various everyday behaviors, we asked the participants to indicate how many times they had engaged in these activities in the last 6 months, using a scale ranging from never, through once or twice, up to 5 times, 6 to 10 times, 11 to 15 times, and more than 15 times. We asked about 33 activities, including culture (e.g., "visiting an exhibition or gallery"), relaxing (e.g., "meditating"), sports (e.g., "went running"), and others (e.g., "crying", "tried something new", "forgot something important"). These were inspired by the signature behaviors of the Big Five study by Chapman and Goldberg (2017). Table 1 presents descriptive statistics, 6-month test-retest stability, and gender differences of the BFI-2 domains and facets.
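As an illustration of the stability analysis summarised in Table 1, the sketch below computes a test-retest correlation and a Cohen's d for the change between occasions; the data frame and column names are hypothetical placeholders, not the study's actual data file.

```python
import numpy as np
from scipy.stats import pearsonr

def retest_stability(t1, t2):
    """Test-retest correlation and Cohen's d for the T1 -> T2 change
    (pooled-SD denominator) for one BFI-2 domain or facet score."""
    t1, t2 = np.asarray(t1, float), np.asarray(t2, float)
    r, p = pearsonr(t1, t2)
    pooled_sd = np.sqrt((t1.var(ddof=1) + t2.var(ddof=1)) / 2)
    d = (t2.mean() - t1.mean()) / pooled_sd
    return r, p, d

# Hypothetical usage with two waves of Extraversion scores:
# r, p, d = retest_stability(df["extraversion_t1"], df["extraversion_t2"])
```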
Results It is apparent that the test-retest stability of the domains is quite good, ranging between .76 (58% of explained variance) and .83 (69%). For the facets, stability varied from .59 (Compassion) to .82 (Sociability), averaging .70, indicating satisfactory stability over the 6-month period. A statistically significant change was detected in 3 facets, but this difference is negligible to small in terms of effect size. Comparing females and males, the stability is quite similar for both genders, although overall slightly better for males. Significant gender differences were found in the Agreeableness and Negative Emotionality domains, in which females scored higher. These differences are small to medium. Moreover, the gender differences in domains and facets are fairly similar at both time points, differing by .13 at most and averaging .05 in absolute value of Cohen's d. To explore the power of the BFI-2 domains to predict satisfaction with life, psychological well-being, and value-oriented behavior, we tested hierarchical linear regression models, which consisted of gender and age in the first step and all 5 domains in the second step. All independent variables were entered simultaneously. To explore the predictive power of the BFI-2 facets, we entered gender and age in the first step and then entered the 15 facets using the forward selection method with a p < .05 inclusion criterion. The gender and age variables were added to these models to control for shared covariance between constructs and to better reflect the variance explained by the BFI-2 over and above gender and age. All models were checked and passed the assumptions of linear regression (Durbin-Watson, collinearity statistics, multidimensional outliers; Field, 2018). The results of these analyses are presented in Table 2, which also contains the correlation coefficients between these variables. As seen in this table, the BFI-2 domains predicted 20% of the variance in satisfaction with life, between 9 and 23% for psychological well-being (M = 16%), and 1 to 23% for the behavioral self-report scales (M = 12%). These values are comparable for the facets: 22% for satisfaction with life, 9 to 24% for psychological well-being (M = 16%), and between 2 and 22% for the behavioral self-reports (M = 12%). To accomplish the last goal of our study, exploring the BFI-2's predictive power for various everyday behavior activities, we predicted each of the 33 behaviors and activities from the BFI-2 domains using ordinal logistic regression and the enter method. This method of analysis was used because the dependent variables (behaviors and activities) were measured on an ordinal level, as described in the Method section. All models included gender and age as controls. In Table 3, we report which behaviors were significantly predicted by each of the BFI-2 domains at the p < .01 and .05 levels.
Table 2. The 6-month predictive power of BFI-2 for well-being and behavioral self-reports. Note. For domain predictors, absolute values of standardized betas higher than .11 are significant at p < .05. For facet predictors, absolute values of standardized betas higher than .10 are significant at p < .05. Absolute values of correlation coefficients higher than .10 are significant at p < .05. All regression models contained gender and age variables. Δ adj. R² for BFI-2 domains: change of adjusted R² accounted for by the BFI-2 domains against the model with age and gender; Δ adj. R² for facets: change of adjusted R² accounted for by the selected BFI-2 facets against the model with age and gender. SWLS: Satisfaction With Life Scale; E: Extraversion; A: Agreeableness; C: Conscientiousness; N: Negative Emotionality; O: Open-Mindedness.
Table 3. Domain predictors of various behaviors and activities during the last 6 months. Note. Behaviors and activities significantly predicted by a domain are listed in that domain's column; all are significant at p < .05 and marked * where p < .01. Positive association is indicated by + and negative by −. Values in brackets are the odds ratios of the predictor in that column. All predictions were controlled for gender and age.
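A minimal sketch of the two-step (hierarchical) regression described above, using statsmodels; the data frame and column names are hypothetical (with gender coded numerically), and the forward selection of facets is omitted for brevity.

```python
import statsmodels.api as sm

def hierarchical_r2(df, outcome, controls=("gender", "age"),
                    domains=("E", "A", "C", "N", "O")):
    """Adjusted R^2 for controls alone vs. controls + Big Five domains,
    mirroring the Δ adj. R^2 values reported in Table 2."""
    y = df[outcome]
    step1 = sm.OLS(y, sm.add_constant(df[list(controls)])).fit()
    step2 = sm.OLS(y, sm.add_constant(df[list(controls) + list(domains)])).fit()
    return (step1.rsquared_adj, step2.rsquared_adj,
            step2.rsquared_adj - step1.rsquared_adj)

# Hypothetical usage for satisfaction with life:
# r2_controls, r2_full, delta = hierarchical_r2(df, outcome="swls")
```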
Discussion The current study focused on the 6-month test-retest stability and predictive power of the Slovak BFI-2. In relation to stability, we discovered high test-retest correlations for all domains and most of the facets. The highest test-retest stability was found for the Conscientiousness domain and the lowest for the Agreeableness domain, although all of these were .76 or higher. The stability for the facets was generally slightly lower: the lowest was found for the Compassion facet, the highest for the Sociability facet, and the average was .70. The good level of stability also holds for females and males separately. Our results confirmed the findings reported by Soto and John (2017) for the 2-month stability of the original BFI-2 and suggest that the Slovak BFI-2 has good test-retest stability. Our results also contribute to the research on other sources of error in personality measures, such as transient error or developmental changes (see Gnambs, 2014). The acceptable level of test-retest correlation after a 6-month period suggests that the BFI-2 has good resistance to this kind of effect and can be a good option not only for studies with a simple correlational design, but also for studies using a longitudinal approach. Our next goal was to explore the predictive power of the Slovak BFI-2, focusing on 6-month predictions of well-being and behavioral measures. As expected, well-being measures were positively predicted by Extraversion and negatively by Negative Emotionality. These results clearly replicated the results of previous studies such as those by Hicks and Mehta (2018) and Reyes, Shmotkin, and Ryff (2002); however, the strong effect of Conscientiousness was not replicated in our study. Nevertheless, the effect of Extraversion and Negative Emotionality on well-being remains robust in our study. This effect has been recognized for a long time in personality research (e.g., Costa & McCrae, 1980) and is usually attributed to direct outcomes of these dispositions: positive affect and negative affect influencing a broad range of well-being variables. The remaining three domains predicted only some well-being variables; however, all of them corresponded to the theoretical assumptions and previous findings (e.g., Anglim & Grant, 2016) and displayed a meaningful pattern of prediction. Open-Mindedness predicted personal growth, Agreeableness predicted positive relations, and Conscientiousness predicted purpose in life. As far as facets are concerned, Depression was the most frequent predictor of well-being variables, which, again, is meaningful because the items of this facet are strongly related to well-being. The relationship between behavior connected to personal values and BFI-2 domains shows a meaningful pattern, too.
The highest proportion of variance explained by the BFI-2 domains was identified in relation to benevolence, being kind to others, which was predicted mostly by Agreeableness. The second highest proportion explained by the BFI-2 domains was in power, reflecting the tendency to have control over others. This was mainly positively predicted by the Extraversion domain (especially the Assertiveness facet) and negatively by Agreeableness (the Respectfulness facet). Looking for stimulation was positively predicted by the Extraversion and Open-Mindedness domains. Extraversion also positively predicted working really hard for achievement and negatively predicted behavior connected to conformity. In addition, Agreeableness positively predicted conformity and universalism, as behavior focused on general goodness. Conscientiousness negatively predicted mainly hedonistic behavior. Although Negative Emotionality was not a notable predictor of the behavioral measures, it positively predicted behavior focused on gaining power and negatively predicted self-direction. On the other hand, the Open-Mindedness domain positively predicted self-direction, universalism, achievement and, quite surprisingly, even behavior focused on following tradition. With the exception of the last mentioned, these results are understandable, and most of them are consistent with the results reported by Soto and John (2017), especially in relation to the conformity, benevolence, power, hedonism, and self-direction values. Differences might have been caused by the effect of culture, which is well described in personality research (e.g., Church et al., 2008). Our last goal was to explore the ecological validity of the Slovak BFI-2 by looking at predictions of everyday behavior. Inspired by the signature behaviors of the Big Five domains reported by Chapman and Goldberg (2017), we selected 33 behavioral acts and asked participants about their approximate frequency in the last six months. This way, we tried to connect the BFI-2 domains to the frequency of these various behaviors during the following 6 months. The results have shown interesting but expected patterns of association. Extraversion positively predicted behavior connected to sociability, assertiveness or social confidence, and personal energy, such as singing in public, visiting a sport event or meeting with friends, being angry and arguing with someone, or forgetting something important. Agreeableness predicted the lowest number of acts; it is connected positively to sensitive and caring behavior, such as cleaning the household or crying, and negatively to assertive or rough behavior, namely arguing. Conscientiousness negatively predicted disorganized or carefree behavior, such as forgetting important things, slacking or taking a day off, and positively predicted only thorough cleaning of the house. Negative Emotionality predicted the highest number of acts, being positively connected mainly to the frequency of arguing and being angry, crying, daydreaming, forgetting important things, or using medication for calming or headache. It also negatively predicted organizing social gatherings, relaxing, or doing sports. Lastly, Open-Mindedness positively predicted exploring the world, such as trying new things, hiking, learning, and being creative. Overall, the results are consistent with the general definition of the individual traits and the theoretically expected behavior (Soto & John, 2017).
However, our study not only confirmed that the Slovak BFI-2 has good predictive validity, but also contributed to the many studies focused on the personality implications of everyday life (e.g., Mehl, Gosling, & Pennebaker, 2006; Fleeson & Gallagher, 2009). Our results confirmed the existence of meaningful trait-behavior links in the Slovak environment and provided solid evidence for the relevance of the Big Five personality traits in understanding everyday behavior. Limitations and Further Directions The main limitation of this study is the self-report nature of the behavioral measures. We did not use any advanced method of behavior measurement, such as a daily diary or peer report, and our results could be biased by social desirability or other personal biases. Moreover, due to our effort to keep the length of the survey reasonably short, we used a limited number of behavioral self-report items. We did not fully report the results of these analyses or evaluate the power of prediction in detail, presenting them in a shorter form instead, because the alternative would have made the study inappropriately long. Finally, none of our measures, with the exception of the BFI-2, had been psychometrically adapted in the Slovak context. As authors, we tried to carefully translate the items of these measures and to critically evaluate their content validity; however, we have no further evidence regarding the psychometric properties of the Slovak versions of these instruments. Future studies should focus on a broader range of daily behaviors connected to the Big Five factors in the Slovak context, which were not covered by this study. These studies should also overcome limitations related to possible self-report bias through alternative methods of behavior measurement, such as diary studies, peer ratings, automatic behavior recording, etc.
2020-10-28T18:55:35.951Z
2020-10-06T00:00:00.000
{ "year": 2020, "sha1": "af18325ad4de004a1625699082fccbb8ff7bbfe8", "oa_license": "CCBY", "oa_url": "https://www.studiapsychologica.com/uploads/Kohut_SP_3_vol.62_2020_pp.246-258.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b649663e6beab3ed273c2a95211e1cb99d6de642", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
122157967
pes2o/s2orc
v3-fos-license
KAM theory, Lindstedt series and the stability of the upside-down pendulum We consider the planar pendulum with support point oscillating in the vertical direction; the upside-down position of the pendulum corresponds to an equilibrium point for the projection of the motion on the pendulum phase space. By using the Lindstedt series method recently developed in the literature, starting from the pioneering work by Eliasson, we show that such an equilibrium point is stable for a full measure subset of the stability region of the linearized system inside the two-dimensional space of parameters, by proving the persistence of invariant KAM tori for the two-dimensional Hamiltonian system describing the model. 1. Introduction 1.1. The state of the art. The upside-down pendulum with the support point oscillating with a frequency ω large enough has been extensively studied in the literature as a simple model exhibiting quite nontrivial behaviour; see [5] and the references quoted therein. The stability of the upside-down position can be proven by the averaging method (see for instance [19], Ch. 9): the result is that if the support point oscillates fast enough then the upward equilibrium position becomes stable. However, such an analysis is not completely rigorous, both because no explicit control on the corrections can be obtained and because it can lead to incorrect results, as already pointed out in [8] and [6]. In fact, the averaging method approach can also be followed for studying the stability of the downward position, and the result one finds in doing so is that such a position is always stable, provided that ω is large enough to make it possible to apply the averaging, say ω > ω_1 for some ω_1: a result obviously unacceptable, as by varying ω above ω_1 one can lose even linear stability (as follows from Mathieu's equation theory). A rather complete review of the averaging method can be found in [2]. A rigorous proof of stability has been given for the linearized system in [1], where also the case of several pendula has been considered. In the latter case the linearized system can be written (by a diagonalization procedure) as a system of several uncoupled Mathieu equations. So both for a single pendulum and for more than one pendulum, the theory of Mathieu's equation applies. In particular, for a single pendulum the physical parameters describing the system have to be such that the parameters (a, q) appearing in Mathieu's equation ẍ + (a + 2q cos 2τ)x = 0 lie inside the stability region corresponding to negative values of a (see [1]): if the amplitude of the oscillation of the support point is small enough (with respect to the length of the pendulum), then one has stability if the frequency ω is above a threshold value ω_0 (see below), and the same value predicted by the averaging method is found for ω_0. 1.2. Stability and KAM theory. What is missing in the literature is a rigorous discussion of the full system (not just the linearized one). In this paper we achieve such a task, by studying the full system by means of perturbation theory techniques. In [2] it is suggested that KAM methods are necessary, but there is no explicit discussion; we think that it can be of interest to discuss the problem in detail by means of the Lindstedt series method recently developed in the literature (see [4] for a list of references), both as an application of the general theory presented in [4] and as an occasion to improve the bounds in a case in which additional properties come into play.
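For orientation, the reduction of the linearized dynamics to Mathieu's equation can be sketched as follows; the sign conventions depend on the choice of variables, so the expressions below are indicative, under one common convention, rather than a quotation of [1].

```latex
% Linearization about the inverted position (one common convention).
% Support point y_P(t) = b cos(omega t); the pendulum equation is
\[
  l\,\ddot\theta + \bigl(g - b\omega^{2}\cos\omega t\bigr)\sin\theta = 0 .
\]
% Set theta = pi + x and linearize (sin theta ~ -x):
\[
  \ddot x - \frac{g}{l}\,x + \frac{b\omega^{2}}{l}\cos(\omega t)\,x = 0 ,
\]
% then rescale time, tau = omega t / 2, to obtain Mathieu's equation
\[
  x'' + (a + 2q\cos 2\tau)\,x = 0 ,
  \qquad a = -\frac{4g}{l\,\omega^{2}} < 0 ,
  \qquad q = \frac{2b}{l} ,
\]
% so increasing omega drives (a, q) into the stability region with a < 0.
```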
The method, based on a tree representation of the quasi-periodic solutions describing the KAM invariant tori, was originally introduced in [12], then revisited in [14] and [11], and adapted to the case of isochronous systems in [4]: it is on the latter that we shall rely. Since the linearized system can be written as a two-dimensional integrable Hamiltonian system, the full system becomes a perturbation of an integrable one: so KAM theory applies. Then we can prove that a large quantity of invariant tori persists under the perturbation, and the nearer the initial data (θ(0), θ̇(0)) are to the upside-down position (π, 0), the nearer the curves obtained by projection of the tori on the pendulum phase space (θ, θ̇) are to such a position: this proves the stability of the upside-down position for the pendulum. Note that in applying the above argument it is a fundamental fact that the system is two-dimensional, so that the existence of the tori yields a topological obstruction preventing trajectories starting inside a torus from crossing it and moving far from it. So such a result cannot be used to study the stability of the upside-down position in the case of more than one pendulum: as a matter of fact, in the latter case it is even likely that Arnol'd diffusion can occur and, as a consequence, the position is not stable at all. Contents. As far as the proof is concerned, in this note we simply show that the system can be written in terms of action-angle variables and gives rise to a perturbed isochronous system; thus we can refer to [4] for the proof of persistence of KAM invariant tori within the framework of the Lindstedt series approach. Note that, once it is verified that the conditions under which the KAM theorem for perturbations of isochronous systems applies hold for our system, one could proceed also by means of the usual KAM techniques. We prefer to use the Lindstedt series method of [4] for the reasons illustrated above, even if theorem 1.4 of [4], briefly recalled in Section 2, could also be proved through other approaches to KAM theory. In Section 2 we introduce the model which we are interested in, and we show how the analysis of [4] can be applied to it; this yields a Diophantine-type condition on the oscillation frequency of the support point. In Appendix A3 we explicitly check that all the hypotheses under which the results in [4] hold are satisfied. Consider a pendulum with support point P = (x_P, y_P) oscillating in the vertical direction with the law y_P(t) = b cos ωt (and x_P(t) ≡ 0). The system is described by the Lagrangian (see, for instance, [18] or [19]) which, by the change of variable, can be written as The corresponding (non-autonomous) Hamiltonian is where x_1 = x and y_1 is the momentum conjugated to x_1. We are interested in the stability of the position (θ, θ̇) = (π, 0), i.e. (x, ẋ) = (0, 0), for ω large enough (see [19] and [1]), so that we can define and write (2.4), divided by δ^2, as where the last sum is obtained by Taylor expanding the cosine function and disappears for δ = 0. We can consider the autonomous Hamiltonian with (I_2, α_2) conjugated variables. By a (canonical) rescaling we can put (2.7) into the form where, for notational simplicity, we still denote by (p_1, q_1) the new variables.
Let us consider the Hamiltonian obtained from (2.9) by putting δ = 0: one easily realizes that the corresponding equation for q_1 is Mathieu's equation, so that the solution is of the form for some ρ (and for a particular choice of the initial phase) and with p_0(τ) a periodic function of period π; μ is a real number in (0, 1) for ω large enough (see [3], [17] and [9]; see also Appendix A1 for the notations and for a review of some basic properties of Mathieu's equation which will be used below). In (2.11) we can choose ρ so that it is possible to pass to action-angle variables through a canonical transformation, see [13] and [10] (see also Appendix A2 below), with α_2 the same on both sides. Then the Hamiltonian (2.9) at δ = 0 becomes which is an isochronous Hamiltonian, while the perturbation (2.14) in (2.9) can be written in terms of the variables (A, α), and gives a function analytic in its arguments (in a suitable domain). By setting δ^2 = ε and defining where the functions F_p(α, A_1) are analytic in (α, A_1) uniformly for all p. Call A the domain of analyticity in A_1 and, for any A ∈ A, denote by B_ρ(A) the ball of radius ρ and center A; the analyticity in α is in a strip Σ_κ = {α : |Im α_j| < κ, j = 1, 2}. The system (2.16) represents two harmonic oscillators interacting through a potential depending only on the angles and on the action variable A_1: the latter condition, in particular, implies that one oscillator is simply a clock, i.e. it rotates with fixed frequency. The corresponding equations of motion are so that one sees that α_2(t) = t. 2.2. Stability of the upside-down pendulum. We want to study the persistence of tori near the origin. Note that, for any value of A, the origin is recovered by setting ε = δ^2 = 0 (see (2.5)). On the other hand, for ε = 0 the scaled Hamiltonian (2.16) reduces to ω · A, so that it admits invariant tori for any value A_0 = (A_{01}, A_{02}) of the action variable A, all run with the same rotation vector ω: such tori are defined by where ω is fixed and A_0 ∈ R^2. Note that the Hamiltonian (2.16) is of the form considered in [4], so that we can apply theorem 1.4 of [4]; for simplicity we write here the complete statement (adapted to the notations used above). In the original variables, along the motion corresponding to the considered torus T, one has, in the sense that, for initial data (x_1(0), y_1(0)) with x_1^2(0) + y_1^2(0) = δ^2, one has |x_1^2(t) + y_1^2(t) − δ^2| < Cδ^4 for some constant C; in particular, for ε small enough, along the motion one has A_1(t) ≠ 0, so that the trajectory never crosses the origin (which can be outside the analyticity domain of the conjugating functions). So we have a closed curve surrounding the origin which can be made arbitrarily near to it. As all trajectories starting from initial data contained inside the torus described by (2.26) have to remain inside, their projections onto the plane (x_1, y_1) have to remain inside the closed curve C. Therefore we obtain the stability of the origin for the motion of the variables (x_1, y_1). 2.3. Extension to vectors satisfying only a non-resonance condition. Note that theorem 1.4 of [4] requires ω to be Diophantine. In the study of stability of elliptic equilibrium points it is well known (see for instance [16]) that a Diophantine condition on the unperturbed frequencies is not necessary, and in fact it can be relaxed (by using the fact that the perturbation depends on A and ε in a precise way).
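For reference, the Diophantine condition invoked above is of the following standard form; the precise constants and exponents are those fixed in [4], so this block is an illustrative sketch rather than a quotation.

```latex
% Standard Diophantine condition on the rotation vector (illustrative form):
\[
  |\,\omega\cdot\nu\,| \ge \frac{C_0}{|\nu|^{\tau}}
  \qquad \forall\,\nu\in\mathbb{Z}^{2}\setminus\{0\},
\]
% with constants C_0 > 0 and tau > 1. Since here omega = (mu, 1), this reads
\[
  |\,\mu\nu_1 + \nu_2\,| \ge \frac{C_0}{|\nu|^{\tau}} ,
\]
% i.e. a quantitative irrationality requirement on mu; the non-resonance
% condition (A4.1) discussed below weakens this requirement.
```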
A mechanism of this kind works also in the present case and, by using the special form of the perturbation (see (2.15)-(2.18)), one can show that also for the vertically driven pendulum the conditions on the rotation vectors can be weakened. This could certainly be done within the formalism of the classical approaches to KAM theory, but here we prefer to discuss such an extension of the proof within the tree formalism introduced in [4]. The analysis is performed in Appendix A4, and gives the following result. 2.4. Remark. Note that the Diophantine condition imposed on ω in theorem 1.4 of [4] (which allows a full measure set of rotation vectors) can in fact be improved: for weaker conditions holding in general we refer to [20], while a case in which optimal conditions can be explicitly worked out can be found in [7]. Nevertheless, in general, that is without making any assumptions on the perturbation, no condition of the kind of (2.28), or whatever else, can be obtained: the possibility of imposing only a non-resonance condition like (2.28) can arise only if the perturbation is of some special form, as is the case for elliptic equilibrium points and for the vertically driven pendulum studied in the present paper. In this sense, the results formulated in [4] are as general as in any other KAM approach to the study of isochronous systems. A1. Mathieu's equation. Consider Mathieu's equation ẍ + (a + 2q cos 2τ)x = 0, (A1.1) where a < 0 and q ∈ R. The solutions of (A1.1) are of the form where μ_0 ∈ C and p_0 is π-periodic. The regions of the plane (a, q) such that μ_0 ∈ R are called stability regions: the corresponding solutions are quasi-periodic, hence (in particular) bounded. A2.2. Second change of coordinates. Consider the change of coordinates Then (A2.4) is an analytic change of coordinates, such that the change of coordinates C_2 ∘ C_1 is analytic and canonical and, in the new variables, the Hamiltonian (2.9) for δ = 0 becomes ω · A ≡ μA_1 + A_2. (A2.5) For the proof see [10] and [13], in particular theorem 2 in Section 1.3 of [10]. Moreover, the second line of (2.9) can be written as a function which does not depend on A_2, and such that the dependence of Φ_p(α, A_1) on α_1 involves only harmonics with |ν_1| ≤ 2p (simply combine the definitions of the two changes of coordinates C_1 and C_2). A4. Non-resonance conditions on the rotation vectors. A4.1. Non-resonance condition on the rotation vector. The result in Section 2 can be improved in the following way. Let ω be such that Then it is still possible to perform the analysis of [4] with some minor changes. We refer to the formalism introduced in [4]: we use the same notations as there, assuming a full knowledge of that paper by the reader, with no further comments henceforth. Assume that the rotation vector ω satisfies the non-resonance condition (A4.1). This means that there exists a constant α such that, for ε small enough, one can prove the following bound. In general, note that k = 1 implies V = 1, so that the above argument yields N_α(θ) = 0 for θ of order 1. Given V > 1, assuming (A4.6) to hold for all trees with fewer than V nodes, we can show that it holds also for V. Let θ be a tree of order k with V nodes and let v_0 be the node which the root line ℓ_0 ≡ ℓ_{v_0} of θ exits from. Call θ_1, ..., θ_m the subtrees of order ≥ 2 entering the node v_0, and denote by k_1, ..., k_m and V_1, ..., V_m, respectively, the orders and the numbers of nodes of such subtrees. Call also k̄ and V̄ the sum of the orders and of the numbers of nodes, respectively, of the subtrees of order 1 entering v_0 (of course k̄ = V̄).
One has m ≥ 0, k_1 + ... + k_m + k_{v_0} + k̄ = k and V_1 + ... + V_m + 1 + V̄ = V. A4.4. Improvement of the bound on the radius of convergence. With respect to [4] we can modify the multiscale decomposition by defining the scales starting from n such that C_0 2^n < α/8 ≤ C_0 2^{n+1}. Then one can reason as in [15] to show that, by using lemma A4.2, instead of a factor C_0^{-k} one has a factor (8α^{-1})^k C_0^{-3k/5} (here the exponent k for 8α^{-1} is a bound on the number of lines which do not contribute to N_α(θ)). This implies that the radius of convergence ε_0 of the series defining the functions h, H, η can be bounded by ε_0 = E C_0^{3/5} (instead of ε_0 = E C_0 as in Section 2), for some constant E depending on α but not on C_0. A4.5. Persistence of KAM tori. Under the condition (A4.1) on ω, the analysis of [4] can be repeated for the Hamiltonian (2.16) with f as in (2.17). By the discussion in Section A4.4 one finds ε_0 ≡ E C_0^{3/5}, so that one can choose ε_* ∈ [0, ε_0] and repeat the same argument as in Section 5 of [4]: the main difference is that one can fix the interval I of size |I| = aC_0^{3/5} and, as C_0^{3/5} ≫ C_0 for C_0 small enough (if C_0 is not small one can choose the constant b in theorem 1.4 of [4] so that the same conclusions hold), one can easily prove that there are infinitely many μ_0 ∈ I such that the corresponding ω_0 verify the Diophantine condition (1.11) of [4].
2016-11-01T15:05:40.344Z
2002-12-01T00:00:00.000
{ "year": 2002, "sha1": "46ff7ffa4141a8b43970ad056728aff76177e881", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3934/dcds.2003.9.413", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "eb30a8c793c9a43d4dcef91e67e668d673a700bd", "s2fieldsofstudy": [ "Physics", "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
216512446
pes2o/s2orc
v3-fos-license
A Scoping Review Protocol to Map Empirical Evidence that Illuminates the Dark Side of Occupations Among Adults The objective of this review is to explore existing literature to identify, map, and synthesise past accounts of occupations that could be considered as constituting the dark side of occupation and which could, consequently, be identified and discussed as such. Presenting findings through use of a synthesis matrix, and formulating a descriptive account of the types of occupations (including their form, function, meaning, and contribution to identity and becoming) that constitute the dark side of occupation, is anticipated to assist with prioritising future collaborative research endeavours, as part of an intended programme of research. Specifically, the review questions are: i) What past accounts of occupations have been discussed or explored in the literature that would constitute falling under the conceptual 'umbrella' of the dark side of occupation? ii) What specific occupations that challenge the pervasive belief in the link between health and occupation have been discussed for the adult population, across all cultures? iii) Where do gaps of knowledge remain regarding the less explored occupations people subjectively experience, and which indicate the priority research areas that need to be explored from an occupational perspective? Background Occupations are all of the things that individuals do in their lives every day (Sundkvist & Zingmark, 2003), on their own and with others, such as their families and communities. Occupations are, therefore, all the things people need, want, and are expected to do (World Federation of Occupational Therapists, 2019) and are understood to impact upon, and be impacted by, our health and well-being. Occupational science is the academic study of human occupation (Zemke & Clark, 1996). From its inception in the late 1980s, occupational science was intended to provide an interdisciplinary theoretical and scientific perspective regarding humans as occupational beings. Hence, initial validation of the research field and the conceptual themes that emerged now requires further examination via research. Themes that align with the dark side of occupation are emerging in contemporary literature and exist in historical accounts, albeit under different and varying terminologies. Yet, to date, published accounts and research findings underpinning the dark side of occupation remain un-synthesised. To reach this point, there is a need to establish the absolute interdisciplinary relevance of the concept; the crucial step to doing so is to scope the literature to determine which occupations could be considered as constituting the dark side of occupation and which could, as a consequence of the scoping review, be identified and discussed as such. Therefore, this current scoping review is not only timely but also crucial for the development of the concept and subsequent related research endeavours. The rationale for scoping accounts of the occupations of adults is their significant difference in lived experience, and in their subjective experience of occupations, compared to childhood occupations (0-18). For instance, play is regarded as a key occupation in childhood (Moore, 2018) and, therefore, is a significant focus in the literature regarding childhood occupations.
The rationale for excluding adults living or residing in an institution, in-patient setting, or care facility is that there is evidence to show adults living in certain contextual environments experience and engage in occupations in specific (contextually driven or dependent) ways. For instance, Cunningham and Slade (2019) explored the lived experience of homelessness amongst five men, and found their engagement in occupation, whilst sleeping rough and later when residing in a homeless hostel, was centred around survival. Likewise, other work has found occupational engagement to be greatly impacted upon by environments, such as regional secure units, where security demands are the priority and limitations are in place (Morris, Cox, & Ward, 2016). Review objective/questions The objective of this review is to explore existing empirical evidence to identify, map, and synthesise past accounts of occupations that could be considered as constituting the dark side of occupation and which could, consequently, be identified and discussed as such. Presenting findings through use of a synthesis matrix, and formulating a descriptive account of the types of occupations (including their form, function, meaning, and contribution to identity and becoming) that constitute the dark side of occupation, is anticipated to assist with prioritising future collaborative research endeavours, as part of an intended programme of research. The three questions of this review are: i) What past accounts of occupations have been discussed or explored in the literature that would constitute falling under the conceptual 'umbrella' of the dark side of occupation? ii) What specific occupations that challenge the pervasive belief in the link between health and occupation have been discussed for the adult population, across all cultures? iii) Where do gaps of knowledge remain regarding the less explored occupations people subjectively experience, and which indicate the priority research areas that need to be explored from an occupational perspective? Methods/Design Initially guided by the Joanna Briggs Institute (JBI) methodology (Peters et al., 2015) for a recommended format, a standard scoping review framework will be used to conduct the review by an interdisciplinary study team of a sociologist, a physiotherapist, and two occupational scientists and therapists. The latter two members will independently screen records for inclusion using the criteria outlined in this protocol. Inclusion criteria Eligibility criteria and methods of analysis have been determined a priori. Participants Where studies do include participants, this scoping review will consider articles that include adults aged 18 and over. As the main objective of this scoping review is to explore the dark side of occupation in adults, papers which relate to people under 18 years will be excluded. The term 'adult' will not be used as part of the search strategy, as initial searches revealed that this significantly limits results. Where age is not stated, articles will still be considered if regarded as a discussion relevant to adults of any age. All sexes (referring to the biological, genetic, and physiological characteristics that generally distinguish females from males) or genders, ethnic identities, educational backgrounds, socioeconomic backgrounds, sociocultural identities, health statuses, and religious/spiritual beliefs will be included. Concept The concept of interest is synthesising the literature that pertains to the dark side of occupation.
Occupation itself is recognised in a broader sense than lay language would denote (as paid work), and can be defined as: "The experiences of humans which necessitate active engagement, have purpose and meaning, and are contextualized" (Molineux, 2017). This current review will consider all of the different ways, definitions, or classifications of occupation that have been stated in the four seminal pieces of work relating to the dark side of occupation; two of these are written solely by the concept's creator (Twinley, 2013, 2017), one was co-authored by Twinley (2012), and one article was written by a group of authors (Kiepek et al., 2019) who have cited Twinley's work in their paper. The latter paper is similar in theme and content to Twinley's 2013 article, and closely related to the concept of the dark side of occupation. To illustrate, the search strategy includes the key words (highlighted in bold here) listed in the first published definition of the dark side of occupation: "Occupations that remain unexplored - such as those that are health compromising, damaging, and deviant - and which therefore challenge the pervasive belief in a causal relationship between occupation and health" (Twinley, 2017). Appendix II further shows how the key words were identified from the aforementioned literature. Context Articles will be considered for inclusion in this review within the context of studies conducted with adults in any country. Due to this, the search strategy (Appendix I) reveals that context has been left open to include all settings. In addition, those articles where it is revealed that the participants were living or residing in an institution (such as a psychiatric unit, prison, or addiction centre), in-patient setting, or residential care, including hostels and shelters, at the time of data collection will be excluded. Types of sources Sources to be considered fall within professional literature: • Peer-reviewed, published papers and doctoral theses, as per JBI (2013). • Studies that specifically focus on the subjective experience of occupation. • All methodological approaches (where the approach is stated) and designs listed under the JBI levels of evidence for meaningfulness (due to seeking descriptions of lived experience of occupations and/or descriptive information regarding occupations). • Full-text papers published in English in the last ten years (since 2009), in an effort to align with the emergence of relevant contemporary literature and with work published around contemporary paradigms and developments. Sources that will not be considered: • This scoping review will not include books, book chapters, conference proceedings, or grey literature, which will allow us to map how the dark side of occupation is conceptualised and explored in original research and expert opinion pieces. Search strategy The search strategy will aim to find published literature. A three-step approach will be taken, in line with the JBI recommended methodology (Peters et al., 2015). i) The initial step included a limited search of PubMed (MEDLINE) and CINAHL and analysis of the text words contained in the title and abstract, and of the index terms used to describe the article. This informed the development of the search strategy, a revised example of which has been appended (Appendix I). To elucidate, the first three CINAHL searches generated 1) 2,208,760, 2) 6,987, and 3) 5,271 results, respectively (see Appendix III for the search strategy). Because the initial searches generated an unmanageable number of results, the team returned to considering search terms and agreed to draw upon the four key pieces of literature from occupational therapy and science publications that have introduced the issue of occupations that have been left in the dark, silenced, and unexplored. These articles were referred to and key terms identified (Appendix II), which are the terms used in the final search strategy (Appendix I). Following this strategy, the first CINAHL search generated 1407 results. ii) Secondly, these terms will be used to search within the electronic databases identified below. It is accepted that an iterative approach has been needed in order to confirm the search strategy. iii) Thirdly, a search of the reference lists of all included articles will then be performed to find any additional items. While an historical consideration of the concept would be interesting, the research team agreed that literature published in the last decade, since 2009, would be included, as its content and findings are deemed to have more relevance to contemporary practice, education, and research. Only English language materials will be included, as the author and research assistant do not have the resources for translation. A record will be kept of the number of papers excluded on the basis of language and reported in the PRISMA flow diagram.
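As an illustration of step ii, the sketch below assembles a Boolean query from the key words highlighted in the published definition quoted above; it is indicative only, since the actual search strings are those recorded in Appendix I, and the exact syntax varies by database.

```python
# Illustrative only: a Boolean search string built from the key words in the
# quoted definition ("unexplored", "health compromising", "damaging",
# "deviant"); the real strategy lives in Appendix I of the protocol.
key_terms = ["unexplored", "health compromising", "damaging", "deviant"]
term_block = " OR ".join(f'"{t}"' for t in key_terms)
query = f'("dark side of occupation" OR {term_block}) AND (occupation* OR "occupational science")'
print(query)
# ("dark side of occupation" OR "unexplored" OR "health compromising" OR
#  "damaging" OR "deviant") AND (occupation* OR "occupational science")
```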
Because the initial searches generated an unmanageable number of results, the team returned to considering search terms and agreed to draw upon the four key pieces of literature from occupational therapy and science publications that have introduced the issue of occupations that have been left in the dark, silenced, and unexplored. These articles were referred to and key terms identified (Appendix II), which are the terms used in the final search strategy (Appendix I). Following this strategy, the first CINAHL search generated 1,407 results. ii) Secondly, these terms will be used to search within the electronic databases identified below. It is accepted that an iterative approach has been needed in order to confirm the search strategy. iii) Thirdly, a search of the reference lists of all included articles will then be performed to identify any additional items. While an historical consideration of the concept would be interesting, the research team agreed that literature published in the last decade, since 2009, would be included, as the content and findings are deemed to have more relevance to contemporary practice, education, and research. Only English language materials will be included, as the author and research assistant do not have the resources for translation. A record will be kept of the number of papers excluded on the basis of language and reported in the PRISMA flow diagram.

Information sources

The databases to be searched include: CINAHL, MEDLINE (EBSCOhost), AMED, Embase, PsycINFO, SocINDEX, Scopus and OTseeker. All professions and disciplines will be included. That said, as the concept of the dark side of occupation stemmed from the identification of a gap in examining occupations that are not necessarily healthy (for instance), a specific search within the Journal of Occupational Science will be performed.

Study selection

Following the search, all identified citations will be imported into EndNote X8 (Clarivate Analytics, PA, USA). At this stage any duplicate citations will be removed. All citations will then be imported into the free, online application Rayyan (Qatar Computing Research Institute, Doha, Qatar) to allow for blind screening of titles and abstracts. The titles and abstracts of citations will be screened independently by the author and the research assistant (the two reviewers) and compared to the inclusion criteria for the review. Publications identified as potentially relevant will be retrieved in full and their citation details will be imported into Rayyan. The full text of these selected citations will be assessed in full against the inclusion criteria independently by the two reviewers. Any disagreements that arise between the reviewers at any stage of the study selection process will be resolved through consensus, or by utilising the third or fourth author as a third reviewer to decide. Publications that do not meet the inclusion criteria will be excluded and the reasons for their exclusion will be reported in the scoping review. The results of the search will be reported in full in the final report and presented in a PRISMA flow diagram (Moher, Liberati, Tetzlaff, Altman, & The PRISMA Group, 2009).

Data extraction

Data relevant to the overarching research question and objectives will be extracted. Tables will be used to chart the data. The main results will be structured based on the dark side of occupation concept that underpins the review, and supported by a descriptive summary.
A data extraction tool has been developed specifically for this scoping review (Appendix IV), and will be used to extract the relevant data from each paper. It follows more typical data extraction designs, and also utilises the work of occupational scientists who seek to understand an occupation's form, function, meaning and contribution to identity and becoming. Indeed, this draws on two of the first approaches used in occupational science to study human occupation (Wilcock, 1999; Zemke & Clark, 1996).

Presentation of Results

The extracted data may be presented using, but not limited to: tables, diagrams, figures, citation maps, word clouds, and concept network maps. This will include a synthesis matrix of those occupations that may (conceptually) be considered as the dark side of occupation, presented in the form of a data table. This has been developed for this scoping review (Appendix V); however, it is anticipated that it will be further refined for use during the review process. In subsequent publications or presentations of the final scoping review, this synthesis matrix will be presented as a work in progress, rather than an absolute or strict and final synthesis. A narrative summary will accompany any tabulated and/or charted results and will describe how the results relate to the review objectives. A discussion of the data and the significance of the findings will be given in the final report.

Discussion

The strengths of this proposed scoping review are that it presents an interesting topic and a concept that is gaining international, interdisciplinary attention (evidenced by online impact and the forthcoming publication of a monograph of international perspectives regarding the dark side of occupation, edited by Twinley). In addition, this review is intended to make an important and unique contribution to interdisciplinary occupational science research, as well as to related interdisciplinary practice such as occupational therapy. The limitations of the proposed study lie chiefly in the fact that the dark side of occupation is a concept covering such a vast array of human occupations that remain unexplored, or under-explored. Therefore, this protocol could be considered too broad in scope. However, the review team remain confident that the work they have done to reach the point of agreeing on the final search criteria has been necessary and has led to considerable clarity.

Twinley and Addidle (2012), Considering violence: the dark side of occupation

Arguably, some occupations may not promote health or wellbeing, such as violence, which is seen as harmful, disruptive and therefore 'antisocial'. How then can occupational therapists work from a truly occupational perspective without an understanding of such antisocial occupations? It is suggested that the definition of occupation needs to include aspects of doing that are not deemed prosocial, healthy or productive, including nonconsensual or deviant sexual acts, drug misuse, alcohol misuse, violence and all other criminal activity.

Twinley (2013)

Amongst other things, it includes tasks, activities, routines or acts that are considered antisocial, perhaps even criminal and illegal. Use of the term 'dark side' is not intended to portray occupation as having two sides. As the definition and understanding of occupation has evolved, the great majority of accounts do now assert that occupation is something that is complex and multidimensional.
It is certainly not something that can be divided into this side and that. However, in many ways the term 'dark side' seems fitting; it suggests occupation is something that has aspects which are less acknowledged, less explored and less understood. It presents occupation as something which has aspects to it that have been left in the shadows. Something that, when prompted to consider, we all know is there, yet something that many of us have not incorporated into our theory, understanding and use of occupation. Perhaps this is because there is an immense dearth of work that clearly incorporates those other aspects of occupation that could be seen to exist as part of the dark side. That is, occupations that could be one of, or a combination of, the following: anti-social; criminal; deviant; violent; disruptive; harmful; unproductive; non-health-giving; non-health-promoting; addictive and politically, socially, religiously or culturally extreme. Occupations that, to the individual performing them, could still be any combination of the following: meaningful, purposeful, creative, engaging, relaxing, enjoyable, entertaining, that can provide a sense of wellbeing and even that are occupational in the sense of being an individual's paid or unpaid work.

Twinley (2017)

Dark side of occupation: Occupations that remain unexplored - such as those that are health compromising, damaging, and deviant - and which therefore challenge the pervasive belief in a causal relationship between occupation and health.

Kiepek et al. (2019), Silences around occupations framed as unhealthy, illegal, and deviant

We suggest the term "non-sanctioned occupations" to encompass occupations that, within historically and culturally bound contexts, tend to be viewed as unhealthy, illegal, immoral, abnormal, undesired, unacceptable, and/or inappropriate.
Multi-Stakeholder Impact Environmental Indexes: The Case of NeXt

The design of proper environmental and social indicators is one of the most critical challenges when monitoring and implementing corporate and government policy measures toward ecological transitions and sustainable development. In our paper we outline and discuss the characteristics of a new vintage of "living" multi-stakeholder community-based indicators based on the principles of self-evaluation, dialogue and simplification, with a specific focus on the NeXt index. We explain the main differences between such indicators and the opposite extreme of static expert-based indicators, how they integrate firm-level scores with compliance with macro multidimensional wellbeing indicators (such as the UN Sustainable Development Goals), and how they complement ongoing regulatory standards currently under development. We also discuss caveats, policy implications and impact in terms of subjective wellbeing.

Introduction

Environmental and social goals are bound to become ever more integrated with traditional economic goals in the coming future. The pressure coming from the ecological transition, and financial investors' awareness that environmental sustainability and the reduction of exposure to ESG (environmental, social, governance) risk will be crucial factors for future corporate competitiveness, are constantly pushing corporations towards greater integration of these three dimensions. In this new scenario the development of sound and implementable environmental and social indicators becomes a crucial tool enabling companies to evaluate, learn about and signal their progress in the ecological transition to consumers and investors, helping them to attract public and private financial resources, as well as increase consumers' willingness to pay for sustainable products. Social and environmental indicators will also be fundamental intermediate tools for crucial policy-related actions such as:

(i) developing standards for non-financial reporting metrics (along this path, Settembre-Blundo et al. [1] focus on sustainability-based risk management mechanisms, while Dwivedi et al. [2] focus on how value chain flexibility/rigidity affects corporate sustainability);
(ii) defining admissible investments for private and government green bond issues (the market for private and government green bonds has been growing dramatically in recent years; the Climate Bond Initiative reports that the total volume of issues amounted to an adjusted USD 257.7 billion in 2019, 51% more than in the previous year [3]);
(iii) elaborating minimum environmental criteria regulating access to the "institutional vote with the wallet" in public procurement (the importance of green public procurement for sustainable development has been acknowledged with a specific

The BES system is a hybrid intermediate example between the static expert-based system and the living community-based system. It includes a process of dialogue, even though it employs neither a co-design process nor joint periodic updates involving relevant stakeholders. Additionally, and differently from the NeXt index, BES is directed at measuring wellbeing at the geographical level and not at the firm level; consequently, the interaction with corporate end-users (starting, in the NeXt indicators' case, from corporate self-assessment) is not as relevant as it is when measuring corporate sustainability.
Beyond the BES process, community-participation approaches have gained prominence in recent years by improving governance and sustainability practices, ranging from the supply of information to the active involvement of stakeholders in project decisions, so as to include fair and valid evidence of impacts. First, according to a "bottom-up" perspective, also driven by awareness of the limits of "top-down" approaches, participatory methods can generate accurate quantitative and qualitative data and can capture local priorities, for greater validity in final decisions. Second, the legitimacy of the final outcome is higher when the potentially affected parties can state their own case before their peers and have an equal chance to influence the outcome (i.e., the process is fair). Third, public participation is identified as the proper conduct of a democratic government in public decision-making activities, since citizens mature into responsible democratic citizens and reaffirm democracy when they become involved in working out a mutually acceptable solution to a project or problem that affects their community and their personal lives. Furthermore, participants can grow to understand their own strengths and abilities, leading to a sense of empowerment, specifically as in the case of empowerment evaluation [27-34].

More specifically, in the evaluation field, participation combined with both qualitative and quantitative methods of data collection can be suited to the purpose of engaging diverse stakeholders and capturing a system's complexity and dynamics. On these premises, Guba and Lincoln [35] proposed the so-called "fourth generation evaluation", according to which evaluators cannot separate themselves from evaluands, since data are created within this interaction. In this sense, evaluators must use a hermeneutic dialectic process and carry out their inquiries "in a way that will expose the constructions of the variety of concerned parties, open each to critique in the terms of other constructions, and provide the opportunity for revised or entirely new constructions to emerge". Similarly, integrated impact assessment and collaborative evaluation show the relevance of evaluations in which there is a significant use of varying combinations of survey, qualitative and participatory methods, as well as a certain degree of collaboration between the evaluator and stakeholders in the evaluation process, in order to meet competing demands [29,36]. Nevertheless, such approaches are often associated with little credible evidence on the impact of policy interventions or social projects. On the one hand, several community-based initiatives remain constrained by the need for quantifiable and 'objectively verifiable indicators' that allow regions to be compared. On the other hand, the few studies relying on rigorous impact evaluation strategies have not evaluated more comprehensive attempts to inform and involve the community in their process. Additionally, in evaluation studies as much as in urban ones, experts are not always involved to overcome a classical problem of active citizenship: the engagement of "really" disempowered citizens, the most vulnerable people, without any chance to affirm their voice [37-39].

A Theoretical Sketch of Our Hypothesis

In Equations (1)-(5) we sketch a theoretical argument outlining the difference between the NeXt multi-stakeholder community indicators and the static expert-based indicators.
We define the quality of an indicator (QI) as a function of the incorporated knowledge and experience of the different relevant stakeholders (ST) and corporate end-users (CEU), plus the competence and technical skills of the statistical experts (SE). By quality we mean the capacity to capture, synthetically, the crucial features of a given phenomenon; its granularity (i.e., its capacity to translate different performances of corporate end-users into indicator differences on a quantitative scale); and the biunivocal correspondence between the ranking order and the quantitative order of two different performances. Using the language of the utility function in economics, these properties translate into reflexivity, transitivity and monotonicity.

QI_t = f(CEU_t, SE_t, Σ_i ST_it)    (1)

Relevant stakeholders are those having skills, experience and competences in the given wellbeing domain (i.e., trade unions for the workers domain, consumers' associations for the product quality domain, environmentalist NGOs for the environmental sustainability domain). Statistical experts are those having know-how on the state of the art and methodologies of wellbeing indicators, while end-users are the same companies that are the object of NeXt scores, which accept scrutiny and become users, since the index confers on them advantages in terms of learning and monitoring their competitive position in an ecological transition. We assume that the knowledge, experience and skills of the three actors do not perfectly overlap. More specifically, we assume that technical experts dispose of all analytical and statistical skills but, without sector-specific experience, can miss the fact that some technically valid solutions fail to capture relevant aspects of the reality in that domain, or that it is impossible for corporate end-users to collect reliable information on a given indicator. On the other hand, corporate end-users and relevant stakeholders have important domain- and sector-specific knowledge but fail to understand how that knowledge can be translated into methodologically rigorous indicators. Relevant stakeholder, corporate end-user and statistical expert abilities are updated following the evolution of the state of affairs in the social, environmental and economic dimensions, as in Equation (2):

ST_it = k(w_t),  CEU_t = e(w_t),  SE_t = s(w_t)    (2)

where k, e and s are the different functional forms reflecting how the different actors of the index update their skills (w) over time. The degree of social and political acceptance of an indicator on the stakeholders' and corporate end-users' side (SA) is, in turn, a function of its quality (QI), cost (in terms of adoption and compliance) (C), friendliness (F), and involvement (Inv):

SA_t = g(QI_t, C_t, F_t, Inv_t)    (3)

All four factors are higher in a living index due to the process of dialogue between experts and stakeholders, first producing a co-design of the indicators, followed by periodic consultation for revision. At the opposite extreme, a static expert-based index fails to incorporate information from stakeholders. Its quality is lower and the degree of acceptance by stakeholders is markedly inferior, both due to a lack of involvement in the process (we refer, here, to the theoretical and empirical literature on procedural utility [19] showing that an individual's acceptance of a given decision depends on the degree of her/his involvement in the process leading to that decision). This theoretical framework makes it easy to understand how a static expert-only-based index (SEI) is at the extreme opposite of a living multi-stakeholder community (NEXT) index.
We, in fact, obtain:

QI_t(SEI) = f(SE_t0) < QI_t(NEXT) = f(CEU_t, SE_t, Σ_i ST_it)    (4)

and

SA_t(SEI) = g(f(SE_t0)) < SA_t(NEXT) = g(f(CEU_t, SE_t, Σ_i ST_it), C_t, F_t, Inv_t)    (5)

Another crucial difference between an SEI and a NeXt index is that the asset of the former is the set of indicators defined at a given point in time. As such, this asset is subject to strong depreciation and can be easily imitated. In the NeXt case, the asset is the community of technical experts, relevant stakeholders and corporate end-users in dialogue and consultation. Such an asset is not subject to the same rate of depreciation and cannot be easily imitated. To conclude, living multi-stakeholder indexes of higher quality and adoption rates create the conditions for a superior contribution to sustainability progress. The NeXt index, therefore, proves much more suitable for the process of trial, error and update required by the complexity of the task at stake and the evolving nature of the economic scenario.

The Process for the Construction of the NeXt Index

The basic tool used to calculate NeXt indicators is the Participatory Self-Assessment Survey 2.0 (PSAS 2.0), co-designed over time by a community including statistical experts, relevant stakeholders and corporate end-users, and revised at regular intervals (see Figure 1 for a graphic description of the steps required to create the index; the PSAS 2.0-NeXt index can be accessed online at www.nexteconomia.org, accessed on 9 October 2020, where corporate end-users are asked to register before performing their self-assessment). The group of statistical experts is based on members of the NeXt Scientific Committee (see the list provided in Appendix A). The survey includes five indicators for each of the following six relevant domains: (i) governance; (ii) workers; (iii) consumers; (iv) the environment; (v) suppliers in the value chain; and (vi) local communities, for a total of 30 indicators. Scores for each indicator are provided on a discrete qualitative scale from one (minimum) to five (maximum) (questionnaire details, describing each domain and the related indicators, are given in Appendix B).

Calculus and Ponderation of Individual NeXt Indicators

The evaluation process follows two steps. In the first step, corporate end-users perform their self-assessment, attributing a score in a one-to-five range to each of the 30 indicators in the six different domains. For each indicator, the survey presents a column where corporate end-users are asked to copy links to corresponding documents supporting their self-assessed score. In the second step of the evaluation process, the relevant stakeholders and statistical experts evaluate whether the information provided is consistent with the self-assessed score. If so, they confirm the self-assessed score; otherwise, they ask for further evidence consistent with the self-attributed score or revise the latter, consistently with the available information. If the statistical experts evaluate that a given indicator does not apply to the given business, the indicator is left missing and the overall score is reparametrized using a standard n/(n-m) correction factor, where n is the total number of indicators in the NeXt index and m is the number of missing indicators.

Aggregation of NeXt Indicators

Aggregation of indicator-specific scores (for total scores within each domain and across domains) is performed using the Mazziotta-Pareto Index (MPI) [40] (see Appendix C).
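As a concrete illustration of the two computational steps just described, the following minimal Python sketch implements the n/(n-m) correction and a Mazziotta-Pareto style penalized mean. The function names are ours; the sketch assumes the overall score is a raw sum of the available 1-5 scores, applies the "mean minus variability penalty" form of the MPI directly to raw scores rather than the standardized version detailed in Appendix C, and assumes a linear map from the 1-5 scale onto the 0-100 interval.

```python
import statistics

def corrected_total(scores, n_total=30):
    """Rescale the raw sum of 1-5 indicator scores when some indicators
    do not apply (None), via the n/(n-m) correction: n total indicators,
    m of them missing."""
    present = [s for s in scores if s is not None]
    m = n_total - len(present)
    return sum(present) * n_total / (n_total - m)

def mpi_penalized(scores):
    """Mazziotta-Pareto style aggregate with a negative penalty:
    mean minus (standard deviation times coefficient of variation),
    so two profiles with the same mean rank lower when more dispersed."""
    mean = statistics.mean(scores)
    sd = statistics.pstdev(scores)
    return mean - sd * (sd / mean)

def rescale_0_100(value, lo=1.0, hi=5.0):
    """Linearly map an aggregate on the 1-5 scale onto 0-100."""
    return 100.0 * (value - lo) / (hi - lo)

# Two domain profiles with the same arithmetic mean (3.0):
print(rescale_0_100(mpi_penalized([3, 3, 3, 3, 3])))  # 50.0
print(rescale_0_100(mpi_penalized([5, 5, 3, 1, 1])))  # about 23.3, penalized
```

The worked example at the end shows the penalty in action: the regular profile keeps its mean, while the dispersed profile with the same mean is pushed down, matching the rationale discussed next.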
This methodological choice has been made to penalize horizontal variability (i.e., companies with higher variability in their individual scores) for a given unweighted arithmetic mean, in order to give value to regularity and to penalize low scores on some indicators, which imply poor evaluations from some of the relevant stakeholders. The theoretical rationale is that the logic of "integral" (all-round) sustainability implies a penalty for low scores on a specific indicator or area. The total aggregate scores and total domain scores are rescaled on a 0-100 interval.

Calculation of NeXt Indicators' Impacts in Terms of Macroeconomic BES and SDG Domains

Each indicator is linked to a priority reference BES and SDG domain. More specifically, the overall structure of the PSAS 2.0 NeXt index is based on a two-sided reference framework:

(i) An international framework calculating the links and consistency of NeXt indicators with the Sustainable Development Goals of Agenda 2030 (https://unric.org/it/agenda-2030/, accessed on 9 October 2020), issued in 2015 by the United Nations; this implies that each of the 30 NeXt indicators is linked with a reference priority SDG. This link is made by connecting the survey indicators, the GRI framework indicators (https://www.globalreporting.org/Pages/default.aspx, accessed on 9 October 2020) and the SDGs to each other. The first match was made by the NeXt Study Center, while the SDG Compass platform (https://sdgcompass.org/, accessed on 9 October 2020) was used for the second match. This platform helps companies implement business strategies coherent with the social and environmental sustainability indicators set by the UN Agenda 2030.

(ii) A national framework calculating the links of NeXt indicators with the 12 domains of Benessere Equo e Sostenibile (https://www.istat.it/it/benessere-e-sostenibilit%C3%A0/la-misurazione-del-benessere-(bes)/gli-indicatori-del-bes, accessed on 9 October 2020), the Italian multidimensional wellbeing framework designed by Istat [41-43], here recalibrated on a corporate basis as BESA, which stands for "fair and sustainable corporate wellbeing"; this implies that each of the 30 NeXt indicators is linked with a reference priority BES domain. This link is made by connecting the survey indicators, the GRI framework indicators and the BES domains to each other. The first match was made by the NeXt Study Center, while the BESA theoretical framework was used for the second match.

Reference to these three frameworks enables the PSAS 2.0-NeXt to calculate a corporate end-user's capacity to generate multidimensional wellbeing through the activation of network-based processes of sustainable development. At the end of the evaluation process, the final set of NeXt scores (compared with past evaluations, if applicable) is given by: (i) the total aggregate score; (ii) the degree of corporate commitment in terms of BES and SDG domains; (iii) the domain scores; and (iv) the individual scores for each of the 30 indicators (an example of results is provided in Figure 2).

Discussion

The success of the living index depends on the level of commitment of the actors from the three involved categories (statistical experts, relevant stakeholders and corporate end-users) and their willingness to participate in and activate the process. This, in turn, will depend on the perceived participation benefits.
The benefit for statistical experts is the refinement of the indicators (and of the underlying theories and methodologies) with the knowledge and experience of the relevant stakeholders and corporate end-users, allowing the design of proof-tested, better fit-for-purpose indicators. The benefits of relevant stakeholders' participation lie in the possibility of co-designing tools that can help them to achieve their statutory goals, as represented by the wellbeing of their stakeholder category. The living indicators can, in fact, create a dialogue with corporate end-users that fosters progress toward higher labor dignity and worker satisfaction (the goal of trade unions), higher product quality and consumer satisfaction (the goal of consumers' associations), greater environmental sustainability (the goal of environmentalists' associations) and a higher quality of life for local communities (the goal of the other NGOs and organizations included among the relevant stakeholders). The benefit of involving corporate end-users consists in having a dashboard of indicators that allows them to monitor their progress and position in terms of stakeholder satisfaction. Monitoring such a position is going to be increasingly relevant, given the recent strategies and orientations of financial investors and regulators; the CEO of BlackRock, the world's largest investment fund, in his 2018 letter to CEOs said that "Without a sense of purpose, no company, either public or private, can achieve its full potential. It will ultimately lose the license to operate from key stakeholders. It will succumb to short-term pressures to distribute earnings, and, in the process, sacrifice investments in employee development, innovation, and capital expenditures that are necessary for long-term growth. It will remain exposed to activist campaigns that articulate a clearer goal, even if that goal serves only the shortest and narrowest of objectives. And ultimately, that company will provide subpar returns to the investors who depend on it to finance their retirement, home purchases, or higher education" [44]. From this perspective the living index tends to create a separating equilibrium among potential corporate end-users. On the one hand are those reporting high scores, who regard as optimal the reputational gains from publicizing their scores (thereby earning a place on the NeXt online, geo-referenced good-practice map that leads to these individual scores, https://www.nexteconomia.org/, accessed on 6 September 2020). On the other hand are those having lower scores, who prefer that they not become public but nonetheless find it important to calculate the values of the indicators in order to monitor their positions with relevant stakeholders. An important limit of the NeXt approach is that score attribution starting from the corporate end-user's self-assessment, while certainly reducing costs and simplifying the process, carries the risk of self-reporting bias. The problem relates to the main issue of green and social washing [45], wherein washing becomes the corporate choice when the gains from upward-biased self-declared corporate responsibility are higher than the expected costs (probability of being detected times the cost of "punishment") of loss of reputation once caught [46,47]. The NeXt approach corrects for this in three ways.
First, it defines a strict correspondence between objective outcomes and indicator scores for items where this is possible (i.e., the ratio between the top and bottom corporate wages takes a value of one to five according to different intervals of the corporate top-bottom wage ratio). Second, it asks companies to provide evidence and documentation, where available, to justify their own self-assessment. Third, and more important, it asks relevant stakeholders to evaluate such self-reported scores. The advantage of the living-index-participation approach is, therefore, that of providing an immediate stakeholder check of corporate declarations, thereby increasing the probability of detection and, with it, the expected cost of washing (through the timing of audits, the quality of information and the country-representativeness of the auditing stakeholders). This is expected to reduce the temptation toward green and social washing, despite the opportunity to self-report one's own level of corporate responsibility (see the sketch below).

A second crucial issue is how NeXt indicators interact with existing regulation in progress. As is well known, there has been a growing effort to incorporate ESG factors in the financial industry in recent years. According to a recent PWC survey, 77% of global fund managers plan to exclude stocks with low ESG standards from their portfolios in the next two years, and most of them calculate the exposure of their stocks to ESG risk, considered a risk factor independent from those traditionally considered (https://www.bloomberg.com/news/articles/2020-10-19/almost-60-of-mutual-fund-assets-will-be-esg-by-2025-pwc-says, accessed on 3 January 2021). Given the growing relevance of corporate social responsibility (CSR) concerns and the willingness of responsible financial investors to pay for it, the temptation of fraudulent CSR reporting grows and, with it, the risk of greenwashing if the expected gains are higher than the expected costs of detection and punishment, in economic and reputational terms. This is why EU institutions have launched two main initiatives. The first is the EU Taxonomy on Sustainable Activities (https://ec.europa.eu/info/business-economy-euro/banking-and-finance/sustainable-finance/eu-taxonomy-sustainable-activities_en, accessed on 7 November 2020), in which the characteristics of investments that can be regarded as sustainable in each of the six domains (climate adaptation, climate mitigation, circular economy, pollution, water, biodiversity) are progressively defined for each industry. The second is the regulation on sustainability-related disclosure in the financial services sector (https://ec.europa.eu/info/business-economy-euro/banking-and-finance/sustainable-finance/sustainability-related-disclosure-financial-services-sector_it, accessed on 8 October 2020), which is redefining ESG disclosure precisely to address greenwashing. According to this regulation, investment funds can promote their ESG characteristics to investors only if they rigorously report progress in the environmental quality of their stock portfolio and alignment with the EU Taxonomy for the so-called Article-8 and Article-9 products. Differently from the recent EU regulation, which mainly concerns large-capitalization listed securities, the NeXt approach is implementable also for small- and medium-sized companies (the large majority, especially in European economies) and covers a wider range of CSR domains, not limiting its scope to environmental issues.
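The washing condition invoked above (wash only if expected gains exceed expected detection costs) can be written down directly. The sketch below is purely illustrative: the function name and all magnitudes are our own assumptions, not values from the paper.

```python
def washing_is_tempting(gain, p_detect, reputation_cost):
    """Washing pays when expected gains exceed expected costs,
    i.e., the probability of detection times the loss once caught."""
    return gain > p_detect * reputation_cost

# Stakeholder verification of self-reported scores raises p_detect,
# flipping the incentive in this purely illustrative example.
print(washing_is_tempting(gain=10.0, p_detect=0.1, reputation_cost=50.0))  # True
print(washing_is_tempting(gain=10.0, p_detect=0.4, reputation_cost=50.0))  # False
```

The point of the sketch is the comparative static only: holding gains and reputational losses fixed, raising the probability of detection through stakeholder checks makes washing unattractive.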
The issue of following as closely as possible, or at least not falling into contradiction with, the two ongoing regulatory processes nonetheless applies when the different measurement paths apply to the same companies. A crucial issue arising with the living indicator approach, as with static indicators, is the risk of not speaking the same language as the international standards that are progressively being created in the field. The reasons why this occurs are, however, different between the two cases. In static expert-based indicators, it occurs because of the missing revision process. In the case of living indicators, it can happen because the dynamic evolution driven by the interaction among participants can lead in directions that do not converge on international standards. A living index has, however, two strategies for coping with the problem. The first is endogenous in the process, since all participants feel the need to comply with international standards and to push toward them. The second is that the system includes methodologies that translate the original indicators into effort in standard classification domains (as is the case for BES and the SDGs).

Conclusions: Limits and Directions for Future Research

Social and environmental indicators will play an increasing role in the future of the ecological transition, under the pressure of the urgent transformation required by the climate challenge and the induced reforms of the regulatory framework. In our paper, we have argued that the move from static expert-only based indicators to "living" multi-stakeholder community-based indicators, developed through participatory processes and co-design between statistical experts, end-users and relevant stakeholders, is crucial to the quality of indicators and their success in terms of adoption by end-users and society overall. While, in the first type of index, the main asset is the static set of indicators and, as such, is subject to rapid obsolescence and depreciation, in the second type of (living multi-stakeholder) index, the asset is the dialogue and interaction (co-design and periodic revision) within a heterogeneous community of technical experts, relevant stakeholders and end-users. We have also emphasized that a living community-based index has other important advantages, as it fosters a process of learning among participants, simplifies and reduces the costs of reporting for companies and is, therefore, easily implementable even by small- and medium-sized firms, allowing them to keep pace with and monitor their progress toward ecological transition.

The main policy conclusion of our research is that the development of community-based living indicators can significantly improve upon traditional ones in several respects, such as better considering the points of view of end-users and relevant stakeholders, leading to easier social acceptance, easier implementation for small- and medium-sized firms, timely updating and greater involvement and participation from all the relevant actors in society. Owing to these properties, such indicators have the advantage of more effectively stimulating involvement in ecological transition goals and, therefore, progress in sustainability.

Appendix B

Table A2. The NeXt Participatory Self-Assessment Survey 2.0 (PSAS 2.0): areas and indicators.
AREA 1. THE CORPORATE AND ITS GOVERNANCE

1.1. Transparency on shareholders and sources of capital. Criterion: transparency on capital ownership with respect to a control group (percentage value). For example: if the main shareholders are X (15%), Y (12%) and Z (8%), the information concerns 35% of the ownership. Less than 10% (score 1); 11-30% (score 2); 31-50% (score 3); 51-70% (score 4); greater than 70% (score 5).

1.2. Corporate culture and actions against illegality and corruption. Criterion: control of suppliers' legality and transparency, to be expressed in percentage terms with respect to the total amount of controlled suppliers. Less than 10% (score 1); 11-30% (score 2); 31-50% (score 3); 51-70% (score 4); greater than 70% (score 5).

1.3. Management strategy and attention to diverse stakeholders. Criterion: levels and modes of stakeholders' engagement, to be expressed through numerical values. The firm disregards stakeholders' engagement (score 1); the firm is aware of the value of stakeholders' engagement, but there is no direct involvement (for example, the company only engages them via indirect links and online research) (score 2); the firm is aware of the value of stakeholders' engagement and there is direct involvement (for example, one meeting with stakeholders) (score 3); the firm dialogues with its stakeholders and also involves them in corporate strategy decisions (for example, at least two meetings with stakeholders) (score 4); the firm dialogues with its stakeholders, involves them in corporate strategy decisions, and measures stakeholders' satisfaction levels (for example, at least three meetings with stakeholders and measurement of the satisfaction level for each of them) (score 5).

1.4. Employee participation and involvement in corporate strategy decisions. Criterion: stakeholders' engagement in corporate strategy decisions, to be expressed in percentage terms (100% stands for their engagement in every corporate decision made). None (score 1); consulting employees for less than 30% of corporate decisions (score 2); consulting employees for more than 30% of corporate decisions (score 3); sharing and asking for employees' participation in less than 30% of corporate strategy decisions (score 4); sharing and asking for employees' participation in more than 30% of corporate strategy decisions (score 5). * Explain what kind of decision is shared.

1.5. Differential between min. and max. remunerations within the company. Criterion: differential between the maximum annual remuneration for the best paid and the minimum annual remuneration for the least paid. Less than 6 (score 5). * To be applied to companies with more than 100 employees only. For companies with less than 100 employees: express the company's own value, explaining the choice on the basis of employees' participation/engagement.

2.2. Respect for employee dignity through fair remuneration (concerning work schedule, tasks performed, and responsibilities assigned). Criterion: positive differential between the total amount of remunerations paid by the company and the minimum levels set by the main union contracts (annual basis), to be expressed in percentage terms. None (score 1); less than 5% (score 2); 5-10% (score 3); 11-20% (score 4); more than 20% (score 5). * To be applied to companies with more than 50 employees only. In any other case, to be considered as "not applicable".

2.3. Dialogue with workers' representatives on health and safety at work. Criterion: attendance and engagement (of both informative and consultative kinds) of one workers' representative for safety and one workers' representative for territorial safety. None (score 1); attended, but neither informed nor consulted (score 2); attended, but informed only on a few aspects (score 3); attended and informed on all aspects (e.g., accidents at work, risk assessment, prevention and organizational measures, etc.) (score 4); attended, informed, and consulted on all aspects (score 5).

2.4. Work-life balance (smart working, gender opportunities, etc.). Criterion: attendance and diversity of work-life balance agreements. None (score 1); one agreement or unilateral decision on work-life balance for a specific employee category (score 2); one agreement or unilateral decision on work-life balance for all employee categories (score 3); two agreements or unilateral decisions on work-life balance for a specific employee category or for all employee categories (score 4); more than two agreements or unilateral decisions on work-life balance for a specific employee category or for all employee categories (score 5).

2.5. Employee career development, rewarding employees' skills and experience through training and lifelong learning. Criterion: for each employee, annual average of training and continuing education hours. Less than 10 (score 1).

AREA 3. RELATIONSHIPS WITH CITIZENS AND CONSUMERS

3.1. Unilateral dialogue (e.g., toll-free number) (score 2); regulated dialogue (e.g., regulated toll-free number) (score 3); digital/analogue channels with precise guidelines (score 4); digital/analogue channels with dedicated employee(s), in accordance with corporate mission and culture (score 5).

3.2. Full and documented information on the environmental and social sustainability of products/services and all related processes, available to customers. Criterion: information on products/services available on labels and informative material. Information available on labels as legally required (score 1); additional information available on labels, beyond the legally required information (score 2); additional information available on labels, through a link to the corporate website (score 3); additional information available on labels about supply chain traceability (score 4); additional information about the supply chain through ICT/multimedia systems (e.g., blockchain, GS1 barcode) (score 5). * To be applied to companies developing services for citizens only.

3.3. Customers' valorization as a stimulus for partnership innovations and co-design of products/services. Criterion: attendance and diversity of interaction modes with clients. The firm disregards customers' suggestions and indications (score 1); the firm considers customers' suggestions and indications (score 2); the firm interacts with single customers (e.g., through social media and F.A.Q.) (score 3); the firm interacts with consumers' associations (score 4); the firm develops shared improvement actions (score 5).

3.4. Effective ways for complaint management and resolution, guaranteeing proper response times and satisfaction levels. Criterion: attendance and diversity of complaint management strategies. No way of contact with customers after-sale (score 1); unregulated and unilateral after-sale contact with customers (e.g., online form) (score 2); direct after-sale contact with customers (score 3); regulated and direct after-sale contact with customers (score 4); joint conciliation and activation of stable partnerships with consumers' associations (e.g., ethical and control committees created with consumers' associations in order to monitor processes and all tracking criteria) (score 5).

3.5.

4.1. Activation of criteria and procedures concerning the choice of direct suppliers and their socio-environmental sustainability. Criterion: relationship between sustainable suppliers and all suppliers, to be expressed in percentage terms (avoiding minimum price bid auctions without concern for environmental and social criteria, and choices based on cost savings only). None (score 1); less than 10% (score 2); 10-30% (score 3); 31-60% (score 4); greater than 60% (score 5).

4.2. Adoption and application of monitoring tools by suppliers on socio-environmental sustainability. Criterion: monitoring suppliers' care towards ethics and human rights, through local visits as well as interviews with managers and employees, to be expressed in percentage terms (percent of the value share of monitored suppliers on total suppliers' value). None (score 1); less than 10% (score 2); 10-30% (score 3); 31-60% (score 4).
Measurement of customers satisfaction rate (percent of customers at least satisfied customers) Criterion: customer satisfaction rate -Less than 60% (score 1) -60-70% (score 2) -71-80% (score 3) -81-90% (score 4) - Higher than 90% (score 5) THE SUPPLY CHAIN Activation of criteria and procedures concerning the choice of direct suppliers and their socio-environmental sustainability Criterion: relationship between sustainable suppliers and all suppliers, to be expressed in percentage terms (avoiding minimum price bid auctions without concern for environmental and social criteria and choices based on cost savings only) -None (score 1) -Less than 10% (score 2) -10-30% (score 3) -31-60% (score 4) -Greater than 60% (score 5) Adoption and applications of monitoring tools by suppliers on the socio-environmental sustainability Criterion: monitoring suppliers' care towards ethics and human rights, through local visits as well as interviews to managers and employees, to be expressed in percentage terms (percent of the value share of monitored suppliers on total suppliers value) -None (score 1) -Less than 10% (score 2) -10-30% (score 3) -31-60% (score 4) - Higher than 60% (score 5) Table A2. Cont. Adoption and applications of monitoring tools by suppliers on the socio-environmental sustainability Criterion: monitoring suppliers' care towards ethics and human rights, through local visits as well as interviews to managers and employees, to be expressed in percentage terms (percent of the value share of monitored suppliers on total suppliers value) -None (score 1) -Less than 10% (score 2) -10-30% (score 3) -31-60% (score 4) -Higher than 60% (score 5) THE SUPPLY CHAIN Criterion: relationship between sustainable suppliers and all suppliers, to be expressed in percentage terms (avoiding minimum price bid auctions without concern for environmental and social criteria and choices based on cost savings only) -None (score 1) -Less than 10% (score 2) -10-30% (score 3) -31-60% (score 4) -Greater than 60% (score 5) Adoption and applications of monitoring tools by suppliers on the socio-environmental sustainability Criterion: monitoring suppliers' care towards ethics and human rights, through local visits as well as interviews to managers and employees, to be expressed in percentage terms (percent of the value share of monitored suppliers on total suppliers value) -None (score 1) -Less than 10% (score 2) -10-30% (score 3) -31-60% (score 4) - Higher than 60% (score 5)
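To make the banded criteria above concrete, here is a minimal sketch, ours and not part of the original rubric, of how a measured indicator could be mapped to a 1-5 score; the band edges follow the "sustainable suppliers" item above, and the handling of values falling exactly on a band edge is our own choice.

import math

# Minimal sketch (illustrative only): map a measured indicator to a 1-5
# score using the rubric's band edges. Edges below follow the
# "sustainable suppliers" item; other items would use their own bands.
# Exact behaviour at band edges is our assumption, not the rubric's.

def band_score(value, upper_edges):
    """Return 1 plus the number of band edges the value exceeds."""
    score = 1
    for edge in upper_edges:
        if value > edge:
            score += 1
    return score

# Share of sustainable suppliers, in percent:
# None -> 1, <10% -> 2, 10-30% -> 3, 31-60% -> 4, >60% -> 5
SUPPLIER_EDGES = [0.0, 10.0, 30.0, 60.0]

if __name__ == "__main__":
    for share in (0.0, 7.5, 25.0, 45.0, 80.0):
        print(f"{share:5.1f}% -> score {band_score(share, SUPPLIER_EDGES)}")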
Evaluation of prenatal assistance based on a benchmark of Donabedian

An evaluative study with a quantitative approach, aiming to evaluate prenatal care in primary health care from the point of view of pregnant women, based on the benchmark of Donabedian. The sample comprised 195 pregnant women monitored during prenatal care in 20 primary health care units (UAPS) in Fortaleza, Ceará, Brazil. Among the pregnant women, 87.7% were in the age group between 18 and 35 years, and 94.4% had attended elementary or secondary education. On the evaluation of the structure, the pregnant women were dissatisfied. With regard to the process, the nurses were available and practiced humanized listening. With regard to the result, 60.5% of the pregnant women were satisfied with the attention received in the primary health care units (UAPS). Therefore, when the operation of the UAPS, the interventions, and the relationships between users and professionals were adequate, they provided greater satisfaction in pregnant women and, consequently, could contribute to the promotion of their health and well-being.

Introduction

Prenatal Assistance (APN) is a protective tool for maternal and child health, as it allows the monitoring of pregnancy, guiding and helping to manage certain risk factors and prevent diseases and/or complications, thus creating a bond of trust between the pregnant woman and the professional [1]. In order to improve the quality of the APN offered in the country, from the confirmation of pregnancy until the first two years of a baby's life, the Brazilian Government created the Stork Network (Rede Cegonha) [2]. A quality APN service plays an important role in reducing maternal mortality, in addition to providing other benefits for maternal and child health. Brazil and 10 other Latin American countries achieved significant advances in the reduction of deaths related to pregnancy or childbirth from 1990 to 2013; Brazil reduced its rate of maternal deaths by 43.0% since the 1990s. However, the World Health Organization (WHO) warned that none of the countries in the region would be able to achieve the Millennium Development Goal of reducing the maternal mortality rate by 75.0% by 2015 [3]. An estimated 289,000 maternal deaths occurred from such complications in 2013, a reduction of only 45% when compared to the 523,000 deaths in 1990. Considering the fifth Millennium Development Goal (MDG), only 11 countries had already achieved the 75.0% reduction goal (six in Asia, four in Africa and one in Europe, Romania) [3]. In Ceará (CE), the main problems associated with the low reduction in maternal mortality (MM) are: low access to reproductive planning; low quality of prenatal care (PN); delay in referral to high-risk PN; lack of active search for pregnant women defaulting on consultations; complications related to pregnancy, childbirth and the puerperium; low valuation of complaints and clinical findings; delay in decision-making in the care of complications and urgencies; and failure to perform the puerperal consultation [3]. This reality requires thorough reflection by managers and health professionals about the conditions under which women give birth and, mostly, about the quality of care received during pregnancy and childbirth. In view of the relevance of the APN for the promotion of maternal health, we ask: how is this assistance being delivered in primary care in Fortaleza-CE?
Based on this question, we opted for this study in order to evaluate prenatal care in primary health care from the point of view of pregnant women, based on the benchmark of Donabedian. The evaluation for improving the quality of the Family Health Strategy proposed by the Ministry of Health adopts as its conceptual reference the model proposed by Donabedian [5], based on systems theory, which considers the elements of structure, process and result, focusing the analysis on health services and their practices [6].

Methodology

Evaluative study with a quantitative approach, conducted in 20 Primary Care Health Units (UAPS) located in Regional Executive Secretariat VI in Fortaleza-CE. This secretariat was chosen because it has the greatest scope among the six, covering the metropolitan region of Fortaleza. The population consisted of 3000 pregnant women who were in the Regional Register in the year 2015. The sample was calculated using the formula for finite populations, with the following parameters: (a) confidence level of 95%, sampling error of 6.93 and p = 0.05, yielding a sample of 195 pregnant women. The study included pregnant women under prenatal monitoring, regardless of gestational age, who had attended at least two nursing and two medical consultations and who had the emotional and physical condition to answer questions. Data collection took place from February to May 2016, through structured interviews, whose instrument covered sociodemographic aspects and the assessment indicators proposed by Donabedian [5], namely: Structure: corresponds to the relatively stable characteristics necessary for the assistance process, covering physical area, human resources, material and financial resources, information systems, regulatory instruments, technical-administrative instruments, and organizational conditions. Process: the set of activities developed between professionals and users, consisting of the relationship between these actors, the mode of reception, and active listening. Result: the product of the actions, including the satisfaction of pregnant women, motivation for effective participation, and creation of and adherence to a bond, i.e., prenatal care for the prevention and/or control of health risk factors. Note that the instrument was pretested on five pregnant women, who were not included in the sample. The interviews were held at the UAPS during the women's attendance at routine appointments, after they registered their consent on an informed consent form. The duration of the interviews ranged from 30 to 40 minutes. The data were organized in the Statistical Package for the Social Sciences (SPSS, version 20.0) and represented in the form of tables and figures. Data were analysed by means of frequency analysis (absolute and relative) and by the following statistical tests: factor analysis, Kaiser-Meyer-Olkin (KMO) and Bartlett's sphericity. The data were then confronted with selected literature and the Donabedian framework [5]. This study was carried out in accordance with Resolution 466/12 of the National Commission of Ethics in Research (CONEP/CNS/MS) [7], and was submitted to and approved by the Ethics and Research Committee (CEP) of the University of Fortaleza (UNIFOR), under Protocol number 11-578.
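For reference, here is a minimal sketch of a finite-population sample-size calculation of the kind described above. This is our own illustration: the exact formula variant and parameter interpretation used by the authors are not stated, so the z-value and the proportion p = 0.5 below are assumptions.

import math

# Minimal sketch (illustrative): Cochran's sample size with a finite-
# population correction. z = 1.96 (95% confidence) and p = 0.5 (maximum
# variability) are our assumptions; the paper reports N = 3000, a 6.93%
# sampling error, and a final sample of 195.

def finite_population_sample(N, z, p, e):
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)       # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / N))    # finite-population correction

if __name__ == "__main__":
    print(finite_population_sample(N=3000, z=1.96, p=0.5, e=0.0693))
    # ~188 under these assumptions; the reported sample was 195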
Structure

According to Table 1, the majority of pregnant women considered the physical structure of the offices (60.0%) and of the screening area (54.4%) welcoming. As for proper temperature, they emphasized the nurse's office (85.1%), the doctor's office (80.0%) and the screening area (56.9%). Only the physical structure of the screening area presented a statistically significant relationship (p = 0.010). In Table 2, regarding the appointment system, 96.4% of the pregnant women mentioned that appointments were made in person and 89.2% by order of arrival; 55.9% declared that scheduling took place every day, 85.1% said that the interval between consultations was once a month, and 53.3% reported the existence of scheduling priorities. According to 87.7% of the pregnant women, the duration of the nursing consultation was adequate; for 74.4%, the medical consultation was as well. About 64.4% received information about the functioning of the UAPS. Only 33.8% stated that the professionals were identified with name and position on a badge, and 92.3% reported the existence of queues for attendance. It should be noted that 76.4% of the pregnant women stated that one day a week was designated at the UAPS for them to be seen. Professional identification badges (p = 0.046) and adequate nursing consultation duration (p = 0.049) showed statistically significant relationships. The professionals who attended the pregnant women were predominantly nurses (93.3%). The reception of users was performed by staff of the Statistical Medical File System (72.3%). Although consultations were scheduled across the three shifts, pregnant women were preferably seen in the morning.

Process

In Table 3, most pregnant women gave a satisfactory assessment of the work process of the nurse and the doctor on all indicators. The indicators most endorsed by the women regarding the Community Health Agents (ACS) were: appropriate language (94.1%); freedom of verbalization (93.4%); valorization of verbalization (94.1%); calling the user by name (92.8%); the existence of dialogue (78.9%); information on name and position held (88.8%); and provision of guidance on preventive conduct (50.8%). For nursing assistants/nursing technicians (AE/TE), the following stood out: appropriate language (77.4%); freedom of verbalization (62.6%); and valorization of verbalization (61.5%). There were statistically significant correlations for appropriate language (p = 0.022), freedom of verbalization (p = 0.036), and information about name and title (p = 0.030). Similarly, legible and self-explanatory names and prescriptions (p = 0.026) in the nursing consultation, and preventive therapeutic conduct (p = 0.016), tests (p = 0.010) and medication (p = 0.046) in the medical consultation showed significant correlations. In the service of the nursing technician or assistant, statistically significant correlations were found for calling the pregnant woman by name (p = 0.043), existence of dialogue (p = 0.014), guidance on health condition (p = 0.050), and explanation of the procedures performed (p = 0.025); and, for the ACS, appropriate language (p = 0.032), freedom of verbalization (p = 0.015) and information on position (p = 0.017).

Result

Analysing Table 4, we find that most (60.5%) of the pregnant women were satisfied with the attention received. However, 72.3% reported the absence of ties with the Family Health Team (EqSF). Most pregnant women (55.6%) expressed dissatisfaction with the attendance provided by the AE/TE.

Structure, Process and Result - Factor Analysis

In Figure 1, according to the factor analysis, correlations β were observed for the variables based on the study's assumptions. After individual analysis of each construct of Donabedian [5], one can infer that the variables selected for the study that composed the Structure did not statistically influence the Process, demonstrated by r = -0.10. On the other hand, the variables that composed the Process statistically influenced the Result, as observed with β = 0.37 (Process on Result). Analysing the correlation of the Structure on the Result, β = -0.13, which represents no statistically significant correlation (Structure on Result).
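As an aside, here is a minimal sketch of the adequacy tests named in the Methodology (KMO and Bartlett's sphericity), using the open-source factor_analyzer package; the variable names and data below are placeholders, not the study's data.

import numpy as np
import pandas as pd
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity, calculate_kmo)

# Placeholder data standing in for the questionnaire items; the real
# study scored 195 interviews on structure/process/result items.
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(195, 10)),
                    columns=[f"item_{i}" for i in range(10)])

chi2, p_value = calculate_bartlett_sphericity(data)  # H0: identity corr.
kmo_per_item, kmo_total = calculate_kmo(data)        # sampling adequacy

print(f"Bartlett chi2 = {chi2:.1f}, p = {p_value:.3f}")
print(f"Overall KMO = {kmo_total:.2f}")  # > 0.6 is usually deemed adequate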
Discussion

According to Donabedian [5], structure refers to all material and organizational attributes, which are relatively stable, of the setting that provides the assistance [6]. According to the data presented, the pregnant women considered the UAPS offices and screening service welcoming, but were dissatisfied with the structural conditions of the waiting room. According to the Ministry of Health (MS), the waiting room should be planned so as to provide a comfortable and pleasant environment, including adjustments of brightness, temperature and noise, and positioning of the seats so as to allow interaction between individuals [8]. Adequate ventilation is another fundamental aspect of quality care for pregnant women, since this factor is essential to maintain the wholesomeness of UAPS environments. Thus, it is recommended that all environments have windows or indirect ventilation (exhaust), allowing air circulation [8]. Another difficulty was the absence of a suitable place for carrying out health education activities, requiring improvisation in improper places, without the minimal conditions of comfort needed. This fact was also observed in another study. However, managers recognize the importance of this space [10]. In relation to the process, Donabedian [5] refers to the activities developed by professionals with their clients, as well as to the skill with which such assistance is exerted [11]. These were evaluated according to organisational, scientific-technical and interpersonal-relationship aspects [7]. In organizational terms, it was observed that the UAPS planned the attendance of pregnant women for a specific day, the consultations were scheduled across the three shifts, and the women were attended by professionals of the assigned area, primarily nurses. Aspects such as planning, implementation, monitoring and evaluation of educational actions in health are fundamental axes of joint activities between the community and health services [6]. The prenatal consultation is the moment that gives rise to a healthcare plan of specific actions for each woman, according to her physical and psychosocial needs. The adherence and satisfaction of women with prenatal care are related to the quality of care provided by health professionals and services [12]. One study showed that the average duration of the nursing consultation in PN ranged from 15 to 20 minutes in subsequent consultations, and 30 minutes in first-time consultations [12]. Regarding the proposal of this study, it is worth mentioning the importance of active listening to this clientele, to provide information that supports an APN of quality, in addition to strengthening the link between professional and user. The APN must overcome noise and discontinuity in the communicative process, basing its actions on humanized care [10]. The technical and scientific aspects relate to the knowledge, skills and practices of protocolized health care, through actions that aim to ensure integral health care for users and to minimize risks, especially in clinical procedures such as drug prescription [6].
According to this study, the nurse best executes these activities to meet health needs, showing greater interaction with pregnant women through the reception, listening, and humanized interface provided to the user. Educational actions reduce the asymmetry in the relationship between pregnant women and health services and improve the quality of prenatal care, with a consequent impact on maternal and child morbidity and mortality, especially in the perinatal period [13]. The reception relates directly to the convenience and humanization of the treatment that the service provides the user; in addition to the operational dimension, it involves listening to complaints and health needs, seeking timely attention through the articulation of the service network. This aspect is fundamental as it influences the level of trust between provider and user, adherence to indications, continuity of attendance, individual respect, and user satisfaction [2,6]. As for interpersonal communication, health outcomes depend largely on the level of information and communication that exists during care practices. Relevant aspects relate to information on the health-disease process, health risks, treatment, prognosis, prevention, side effects of medications, risk minimization and health care [6]. For Donabedian [5], the Result corresponds to the consequences of activities in health services, or by the professional, in terms of changes in the state of health of the patients, considering also changes related to knowledge and behaviours, as well as user and worker satisfaction linked to the receipt and provision of care [14]. Although most pregnant women were satisfied with the care received, it should be noted that most of them had no link with the EqSF. This leads us to infer that the EqSF still needs to participate more effectively in the APN, since the bonding process can be considered essential to the quality of care. Pregnant women who said they were satisfied with the attention received nonetheless missed important criteria, such as effectiveness and efficiency, as described by Donabedian [15]. The factor analysis provides tools to analyse the structure of correlations between variables or assumptions. Thus, it is concluded that the Structure did not have a relevant impact on the Process, nor on the Result. However, the Process had the greatest impact on the Result, as perceived by pregnant women. The use of evaluative processes, understood as critical-reflexive action developed on the organization, operation, procedures and working practices of management and services, contributes effectively to ensuring that managers and professionals have the information and knowledge necessary for decision-making aimed at meeting health demands and needs, with quality and resolutiveness for the system and its users' satisfaction [6]. Therefore, the interaction between professional and patient is a primordial aspect of nursing care, and appears as an important step towards the success of the relationship between the two, as it is a fundamental tool to establish a relationship of care and assistance consistent with the needs of every pregnant woman [16].

Conclusion

From the analysis of the data, it appears that the UAPS operated with a poor structure for prenatal care: the physical plant, material resources, the appointment system, the queues, and the care dispensed by the EqSF and other employees demanded a more thorough look on the part of managers.
As for the process, in the opinion of users, the AE/TE needed to include dialogue in their professional practice, as well as to assist the nurses and doctors in carrying out educational activities for prevention and damage control in health. It was evident that the nurses stood out in the implementation of the educational process, and the ACS in establishing bonds, which is facilitated by the nature of their activities in the community. The factor analysis revealed statistically that the Process, in the opinion of pregnant women, had the greater impact on the Result. Therefore, when the operation of the UAPS, the interventions and the relationships between users and professionals were adequate, they provided greater satisfaction in pregnant women and, consequently, could contribute to the promotion of their health and well-being. In general, the results of this study may support the (re)planning of actions inherent to the APN by managers and by the EqSF, based on the assumptions of Donabedian [5]. The quality of the APN is the basis for the promotion of the health of women in the pregnancy-puerperal cycle and of the child, and therefore for the reduction of maternal and perinatal morbidity and mortality.
Decomposing Satellite-Based Classification Uncertainties in Large Earth Science Datasets Collection of increasingly voluminous multi-spectral data from multiple instruments with high spatial resolution has posed both an opportunity and a challenge for maximizing their utilization, analysis, and impact. Obtaining accurate estimates of precipitation globally with high temporal resolution is crucial for assessing multi-scale hydrologic impacts and providing a constraint for development of numerical models of the atmosphere that provide weather and climate predictions. Precipitation type classification plays an important role in constraining both the inverse problem in satellite precipitation retrievals and latent heat transfer within weather prediction simulations. Precipitation type, however, is often reported deterministically, without uncertainty attached to an estimate. Machine learning techniques are capable of extracting content of interest from large datasets and accurately retrieving discrete and continuous properties of physical systems, but with limited insight into the retrieval components, such as errors and the physical relationship between the observed and retrieved properties. To address this shortcoming, we perform precipitation type classification to introduce a novel tool for decomposing errors of satellite-retrieved products. We use Bayesian neural networks to map Global Precipitation Measurement mission Microwave Imager observations to Dual-frequency Precipitation Radar-derived precipitation type, which perform comparably to deterministic models, but with the added benefit of providing well calibrated uncertainties. Through uncertainty decomposition, we demonstrate that well calibrated uncertainties are useful for making decisions concerning high uncertainty predictions, model selection, targeted data analysis, and data collection and processing. Additionally, our Bayesian models enable mathematical confirmation of a data distribution change as the cause for an unacceptable decline in model accuracy.

I. INTRODUCTION

The Global Precipitation Measurement (GPM) mission [1] uses a constellation of passive microwave radiometers to offer a nearly global sampling of rain and snowfall rate estimates. The GPM core observatory carries a passive Microwave Imager (GMI) [2] and an advanced Dual-frequency Precipitation Radar (DPR) system [3]. The two instruments are used to build a link between passive microwave (PMW) brightness temperatures and radar-derived precipitation rates. This link is then employed by an enterprise precipitation retrieval [4] to provide global estimates of precipitation rates. Driven by the globally observed link, the retrieval delivers global precipitation estimates but suffers from region-specific biases [5] related to precipitation system morphology. If provided, information about the precipitation type significantly mitigates this problem, as shown in [5], where a simple machine learning model was employed to predict precipitation class (i.e., convective vs. stratiform type). Convective rainfall is usually associated with stronger vertical motions and heavier rainfall than stratiform precipitation [6].
While demonstration studies confirm the great potential of machine learning (ML) methods for solving this particular problem, in order to operationally apply ML, a model must prove not only to be accurate but also to be capable of quantifying how predictive uncertainty varies when the model is applied to different types of precipitation systems (e.g., tropical cyclones, mesoscale convective systems, disorganized precipitation) across various regions on Earth. This becomes especially important when an enterprise retrieval is used, such as the Goddard Profiling Algorithm (GPROF) [4], which must operate over the entire globe as observed data distributions (i.e., brightness temperatures from different PMW radiometric bands) may change over time. The main drivers of variability in the information content are commonly the technical characteristics and age of the sensors used for the enterprise products, such as those from the GPM satellite constellation. Although the properties of each sensor are well understood, the effect of their variability on retrieval performance with deep learning is not. Providing such information remains a challenge; however, this study offers one possible solution to the problem. Recently, data from the GPM mission was used to apply novel Bayesian deep learning (BDL) models to improve precipitation type classification of multi-spectral PMW observations of precipitation events [7]. Orescanin et al. demonstrated that BDL models can establish a stronger link between raw GMI data and precipitation system morphology over ocean-based precipitation events than deterministic deep learning (DDL) models. Furthermore, these BDL models outperformed the GPROF precipitation type product, part of the standard output of the currently operational precipitation retrieval for the GPM mission [4], [8]. The models successfully combined deep learning with Bayesian statistics to provide accurate precipitation type predictions while simultaneously providing useful measures of uncertainty [7]. Previously, BDL models have been shown to provide uncertainties for decision making, to learn useful information from small datasets, and to be more robust to overfitting on the training data than their deterministic counterparts [9]-[12]. Recent applications of BDL to remote sensing tasks include Active Learning tasks on Synthetic Aperture Radar (SAR) data [13] and hyperspectral imagery [11] using MC Dropout as a variational Bayesian approximation [10] with rudimentary convolutional deep learning models. Additional recent work focuses on seismic facies classification [14] using BDL with a simple convolutional model containing several hidden layers. The results from Orescanin et al. [7] provide a realistic real-world benchmark on a large Earth Science dataset with useful measures of per-pixel uncertainty, quantified by predictive entropy, which has been a known gap in the existing literature [12]. Further, those results demonstrated that high uncertainty was correlated with misclassified pixels [7]. The models had well calibrated uncertainties, demonstrated by the fact that rejecting data points with high entropy values caused model performance on the remaining data points to increase. However, bulk or total uncertainty information, such as predictive entropy or variance [7], [12], fails to identify sources of uncertainty in the developed model.
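For concreteness, here is a minimal sketch of the bulk metric in question, predictive entropy computed from N Monte Carlo forward passes. This is our own illustration; the array shapes are assumptions, not the paper's code.

import numpy as np

# Minimal sketch (ours): predictive entropy as a bulk uncertainty metric.
# probs has shape (N, num_pixels, num_classes): N Monte Carlo forward
# passes through a Bayesian network, softmax outputs per pixel.

def predictive_entropy(probs):
    p_mean = probs.mean(axis=0)                       # MC average per class
    return -(p_mean * np.log(p_mean + 1e-12)).sum(axis=-1)

# Example: 25 samples, 4 pixels, 2 classes (convective vs. stratiform)
rng = np.random.default_rng(0)
logits = rng.normal(size=(25, 4, 2))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print(predictive_entropy(probs))   # one bulk value per pixel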
Der Kiureghian and Ditlevsen [15] characterize uncertainty as epistemic if it can be reduced, or as aleatoric if it cannot be further reduced (e.g., if it is caused by noise from the sensor). Our key contributions in this article are:
• Combining BDL with a meaningful real-world remote sensing application to create models with well calibrated uncertainties.
• Decomposing uncertainty into its aleatoric and epistemic components [16] to make decisions about high uncertainty predictions, model selection, targeted data analysis, and data collection/processing.
• Providing a method to detect virtual concept drift using the components of the decomposed uncertainty.
Additionally, we systematically benchmark several BDL methods, analyze the quality and consistency of aleatoric and epistemic uncertainty representations, and provide a visual example of handling high uncertainty predictions. We accomplished this by training a deterministic model and five different types of Bayesian models and measuring their accuracy on two temporally distinct case study datasets and one case study dataset with a distinctly different distribution. If shown to be robust, this Bayesian approach to error decomposition will provide additional, much needed information to allow for easier implementation of ML models into satellite-derived multi-platform retrievals of atmospheric, oceanic, or terrestrial properties.

A. Bayesian Deep Learning

Many recent machine learning advances can be attributed to deep learning, using artificial neural networks with multiple hidden layers. However, these models are deterministic and do not provide information about the uncertainty of their outputs. By incorporating a Bayesian approach, it is possible to create models that provide information about uncertainty in prediction. This is achieved by replacing the weights of a neural network, θ, with a distribution that is updated as the model is developed on training data, D. Mathematically, the model weights are treated as a prior distribution, p(θ), and conditioned on the evidence, the distribution of the training data, p(D). When Bayes' Theorem is applied, the posterior distribution is:

p(θ|D) = p(D|θ) p(θ) / p(D).   (1)

One of the main difficulties with applying a Bayesian approach is that the denominator in Eq. 1 often has no closed-form solution and is computationally intractable [9]. As a result, an approximation of p(θ|D) is computed instead. Variational inference is one method to approximate this posterior. The goal of variational inference is to create an optimization problem that identifies the distribution in a family of distributions, q*(θ) ∈ Q, that is least distant from the target distribution, p(θ|D). The measure of distance used is the Kullback-Leibler (KL) divergence. The optimization problem is characterized by these two equations [17]:

q*(θ) = argmin over q(θ) ∈ Q of KL(q(θ) || p(θ|D)),   (2)

KL(q(θ) || p(θ|D)) = ∫ q(θ) log [ q(θ) / p(θ|D) ] dθ.   (3)

However, Eq. 3 still contains p(θ|D), which is intractable. To solve the optimization problem without explicitly calculating p(θ|D), Eq. 3 can be re-written as [17]:

KL(q(θ) || p(θ|D)) = log p(D) − ELBO(q(θ)).   (4)

Since the first term of Eq. 4 does not depend on q, it can be ignored when solving the minimization problem. Instead, the minimization problem is solved by maximizing the second term in Eq. 4, the evidence lower bound (ELBO). The optimization problem then becomes [17]:

q*(θ) = argmax over q(θ) ∈ Q of ELBO(q(θ)),   (5)

where

ELBO(q(θ)) = ∫ q(θ) log [ p(θ) p(D|θ) / q(θ) ] dθ.   (6)

In this study, this problem is further simplified by restricting Q to the fully factorized Gaussian distributions, as described in [18].
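To ground Eqs. 5-6, here is a minimal sketch of maximizing the ELBO with a mean-field Gaussian posterior on a toy one-parameter model. This is our own illustration, not the paper's training code; the toy prior, likelihood, and grid search are assumptions chosen for brevity.

import numpy as np

# Minimal sketch (ours): variational inference for a toy model
#   prior:      theta ~ N(0, 1)
#   likelihood: y_i | theta ~ N(theta, 1)
# with mean-field q(theta) = N(mu, sigma^2). We evaluate the ELBO of
# Eq. 6 by Monte Carlo over a grid and keep the best (mu, sigma); a real
# model would ascend the ELBO with stochastic gradients instead.

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=50)

def log_joint(theta):            # log p(theta) + log p(D | theta)
    log_prior = -0.5 * theta**2 - 0.5 * np.log(2 * np.pi)
    log_lik = np.sum(-0.5 * (y[:, None] - theta)**2
                     - 0.5 * np.log(2 * np.pi), axis=0)
    return log_prior + log_lik

def elbo(mu, sigma, n_samples=2000):
    theta = mu + sigma * rng.normal(size=n_samples)    # reparameterization
    log_q = (-0.5 * ((theta - mu) / sigma)**2
             - np.log(sigma) - 0.5 * np.log(2 * np.pi))
    return np.mean(log_joint(theta) - log_q)           # E_q[log p - log q]

grid = [(m, s) for m in np.linspace(0, 3, 31) for s in (0.05, 0.1, 0.2, 0.5)]
mu, sigma = max(grid, key=lambda ms: elbo(*ms))
n = len(y)
print(f"best q: mu={mu:.2f}, sigma={sigma:.2f}")
print("exact posterior:", y.sum() / (n + 1), (1.0 / (n + 1)) ** 0.5)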
This simplification allows for the application of the Flipout and Reparameterization methods of variational inference over the weights. Flipout prioritizes a more exact gradient computation over computational efficiency in comparison to other variational inference implementations [19], while Reparameterization prioritizes ease of computation when computing gradients [20]. In this article, these methods are compared to Monte Carlo (MC) dropout and a deterministic model implementation. MC dropout is equivalent to defining Q as Bernoulli distributions, but without explicit KL divergence calculation [21]. During training, it is possible for the KL divergence to grow rapidly and prevent model convergence, since the KL divergence is a penalty on the expected log-likelihood (see Eq. 6). One way to address this problem is to decrease the penalty by placing a weight less than one on this term. In [22], this term is given a decreasing weight over the course of M mini-batches:

π_i = 2^(M−i) / (2^M − 1),  i = 1, …, M.   (7)

This KL reweighting scheme allows the prior to have a greater effect at the beginning of each epoch and the data to have a greater effect at the end of each epoch [22]. The goal of inference with BDL is to make a prediction, y, from new data, x. For a classification problem with c classes, the model provides the probability that y is a given class, p(y = c|x, θ). Since the weights of the models are distributions, the average probability is calculated by using Monte Carlo integration with N samples [12], [14]. Using our Bayesian models, we made 25 predictions (samples) for each input. The average probability per class (p_c) is calculated as:

p_c = (1/N) Σ over n = 1..N of p(y = c | x, θ_n),  θ_n ~ q(θ).   (8)

The class that yields the highest p_c is chosen as the predicted class label. The same N predictions are also used to calculate the variance of p_c, providing a measure of uncertainty. While having the total uncertainty is useful, knowing the source of the uncertainty is even more helpful. The total uncertainty can be expressed as the sum of the aleatoric uncertainty and the epistemic uncertainty [10], [15], [16].
• Aleatoric uncertainty is inherent in the data and cannot be reduced by providing the model more training data.
• Epistemic uncertainty is attributed to uncertainty in the model and can be reduced by increasing the amount of training data available in regions of greater epistemic uncertainty.
Both [23] and [16] propose methods for estimating these individual uncertainties. However, the method described in [23] requires extra variables to explicitly model the mean and the variance on the architecture output, which we call architectural decomposition, in order to calculate the epistemic and aleatoric components. In contrast, [16] proposes a method to calculate the epistemic and aleatoric components of the uncertainty without explicit architectural changes. Kwon et al. [16] compared both approaches for uncertainty decomposition on the task of ischemic stroke lesion segmentation. In their analysis of variance decomposition using the method in [16], when the prediction disagreed with the truth, high per-pixel uncertainty correctly identified regions that were misclassified, for both false negatives and false positives. On the other hand, their analysis using architectural decomposition [23] did not yield useful information for the same task. In this work, we adopt the uncertainty decomposition approach of [16], using the following formulation of the aleatoric and epistemic components:

aleatoric = (1/N) Σ over n = 1..N of [ diag(p_n) − p_n p_n^T ],
epistemic = (1/N) Σ over n = 1..N of ( p_n − p̄ )( p_n − p̄ )^T,   (9)

where p_n is the vector of predicted class probabilities from the n-th Monte Carlo sample and p̄ = (1/N) Σ_n p_n.
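A minimal sketch of Eq. 9 in code follows (ours; shapes are assumed as in the entropy sketch above, and reporting the per-pixel scalar as the trace of each matrix is our own choice):

import numpy as np

# Minimal sketch (ours) of the Kwon et al. decomposition in Eq. 9.
# probs: (N, num_pixels, C) softmax outputs from N Monte Carlo samples.

def decompose_uncertainty(probs):
    p_bar = probs.mean(axis=0)                         # (num_pixels, C)
    # Aleatoric: mean over samples of diag(p) - p p^T; diagonal terms
    # are p_c (1 - p_c).
    aleatoric = (probs * (1.0 - probs)).mean(axis=0)
    # Epistemic: mean over samples of (p - p_bar)(p - p_bar)^T; diagonal
    # terms are (p_c - p_bar_c)^2.
    epistemic = ((probs - p_bar) ** 2).mean(axis=0)
    # Report per-pixel scalars by summing the diagonal over classes.
    return aleatoric.sum(axis=-1), epistemic.sum(axis=-1)

rng = np.random.default_rng(0)
logits = rng.normal(size=(25, 4, 2))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
ale, epi = decompose_uncertainty(probs)
print("aleatoric:", ale)
print("epistemic:", epi)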
B. Dataset Description

This study uses the well-established 12-month dataset collected over the oceans in 2017, following the same approach as in [7]. The standard GMI output provides brightness temperature observations at 13 different channels, including both vertical (v) and horizontal (h) polarization, with varying FOV size. The available GMI frequencies and corresponding fields of view appear in Table I. Each sample is an observation vector over a 125 km × 125 km area centered on the observing field of view (FOV), corresponding to a patch of 25×9 individual GMI pixels. Brightness temperatures were collected at these pixels at all of the 13 GMI channels and stored in 9×25×13 arrays. The arrays were then normalized using z-score scaling [24]. To ensure accurate matching between DPR- and GMI-viewing geometries, each individual GMI pixel was labeled (convective or stratiform) by applying Gaussian weighting to DPR-observed precipitation rates [5] and calculating a convective fraction of precipitation volume within the GMI FOV. Pixels with a fraction of 50% or more were assigned a convective flag; the remaining pixels were labeled as stratiform. Noise in the dataset was minimized by removing observations containing any missing or non-classified data, comprising less than 5% of the total data. The remaining ∼14 million samples were further split into training/validation/test data with an 80/10/10 ratio, preserving roughly equal representation of both classes (i.e., forming balanced data subsets) [7]. Due to the balanced composition of these data subsets, we present only accuracy in the remainder of this article as the metric of model performance. Accuracy is calculated as:

accuracy = (TP + TN) / (TP + TN + FP + FN),

where TP, TN, FP and FN denote the counts of true positives, true negatives, false positives and false negatives, respectively. We treated the DPR-derived classification as the true label for determining whether or not a prediction was correct. Separately from the traditional training/validation/test datasets, with the goal of demonstrating the trained models' ability to generalize on unseen data, experiments were also conducted using two temporally independent, single-swath case study datasets (Case 1 and Case 2), and one year of data collected over land (Case 3). Case 1 and Case 2 represent instantaneous observations of two separate precipitation events over ocean, captured by the DPR and GMI sensors. Case 1 is a subtropical marine mesoscale convective system (MCS) located near shallow convection on 11 August 2018 over the North Atlantic. Case 2 is a section of Hurricane Lane observed southeast of Hawaii by the GMI on 19 August 2018. The scenes observed by GMI in the 18.7 GHz horizontally polarized band are depicted in Figure 1, with the smaller DPR swaths enclosed by the black and white lines. To test the models on input features with a different distribution, one year of global observations collected over land was used to form the Case 3 dataset. All three case study datasets were collected in 2018 and are temporally independent from the training/validation/test datasets.
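A minimal sketch of the labeling and normalization steps described above follows (ours; the Gaussian FOV-weighting details of the actual matching procedure are simplified away, and per-channel global statistics are our own choice of z-score convention):

import numpy as np

# Minimal sketch (ours): z-score normalization of 9x25x13 brightness-
# temperature patches and convective/stratiform labeling by convective
# volume fraction. The Gaussian FOV weighting of the real matching
# procedure is omitted.

def zscore(patches):
    # patches: (num_samples, 9, 25, 13); normalize each channel globally
    mean = patches.mean(axis=(0, 1, 2), keepdims=True)
    std = patches.std(axis=(0, 1, 2), keepdims=True)
    return (patches - mean) / (std + 1e-8)

def label_pixel(convective_volume, total_volume):
    # Convective flag when the convective fraction >= 50%, else stratiform.
    return 1 if convective_volume / total_volume >= 0.5 else 0

patches = np.random.default_rng(0).uniform(150.0, 300.0, size=(8, 9, 25, 13))
print(zscore(patches).shape, label_pixel(3.2, 5.0))   # (8, 9, 25, 13) 1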
C. Model Architecture and Training

Based on the results in [7], a residual network (ResNet) V2 [25] with 38 layers was chosen as a representative deterministic architecture to classify precipitation type as either convective or stratiform. Bayesian ResNet architectures were adopted in an identical configuration to the deterministic architecture by following the approach in [26]. Bayesian ResNet model architectures with Flipout layers [19] and Reparameterization layers [20] were implemented utilizing the TensorFlow Probability library [27]. Model weights were initialized for training following He et al. [25]. The Adam optimizer was used with a starting learning rate of 0.001. The validation loss was monitored in order to conduct learning rate annealing [28]; the learning rate was reduced by a factor of 10 if there was no reduction in validation loss after 10 consecutive epochs. To regularize against overfitting, an early stopping strategy was employed [24]. If early stopping did not occur, training was terminated at 600 epochs. Our Bayesian models, using a batch size of 128, required approximately 3 weeks to train on a single NVIDIA RTX 8000 48GB GPU. Both deterministic and Bayesian models were trained with the same strategy for fairness of benchmarking.
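As a minimal sketch of how Bayesian convolutional layers of this kind are declared with TensorFlow Probability (ours, assuming a TF2/TFP version where tfp.layers is available; the full 38-layer ResNet configuration is not reproduced here, and the KL scaling constant is an assumption):

import tensorflow as tf
import tensorflow_probability as tfp

# Minimal sketch (ours): a small Bayesian convolutional classifier using
# TensorFlow Probability's Flipout layers, with the KL divergence scaled
# down (here by the number of training examples) so it does not dominate
# the expected log-likelihood early in training. This is a toy stand-in
# for the 38-layer Bayesian ResNet described in the text.

NUM_TRAIN = 14_000_000  # approximate training-set size from the text

kl_fn = lambda q, p, _: tfp.distributions.kl_divergence(q, p) / NUM_TRAIN

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(9, 25, 13)),
    tfp.layers.Convolution2DFlipout(
        32, kernel_size=3, padding="same", activation="relu",
        kernel_divergence_fn=kl_fn),
    tf.keras.layers.GlobalAveragePooling2D(),
    tfp.layers.DenseFlipout(2, kernel_divergence_fn=kl_fn),  # class logits
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])
model.summary()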
A. Effects of KL Reweighting on Optimization

The results of early experiments indicated that the weight of the KL divergence term (see Eq. 6) for the Flipout and Reparameterization models needed to be reduced. Similarly to [22] and [29], we observed that the KL divergence term rapidly increased during the early stages of training for the Flipout and Reparameterization models, preventing these models from converging. The results in this section show the accuracy of these models when the KL term is set to zero and when the KL term weight is reweighted according to Eq. 7. The model accuracy achieved when using the KL reweighting scheme in Eq. 7 is comparable to the model accuracy of MC Dropout, a Bayesian model where the KL divergence term is not calculated. The precipitation type classification derived from GPROF, the NASA operational passive microwave precipitation retrieval for the GPM mission, serves as the benchmark for experimentation. Table II lists the accuracy of all deep learning models, which achieve higher classification accuracy than the GPROF benchmark. For Bayesian models, the predicted class was determined using the mean probability of 25 predictions (N = 25 in Eq. 8). Table II lists the model type with KL weighting scheme followed by the classification accuracy of each model on the test set, a swath of the North Atlantic Ocean (Case 1), and a swath of the Pacific Ocean southeast of Hawaii (Case 2). On the test set, all of the Bayesian deep learning models performed comparably to or better than the deterministic ResNet38 V2 model (0.868 accuracy). For this dataset, setting the KL term to zero produced higher accuracy for the Flipout and Reparameterization models (0.927 and 0.920) than reweighting the KL term (0.866 and 0.864). Since the test and training datasets are subsets of the same dataset, these results indicate that setting the KL term to zero improves classification accuracy when the test data distributions are similar to the distributions within the training data. However, while it is possible to control data distributions and splits during model development, such control is not feasible during live inference; there is no way to control the observed data distributions of a live sensor. All models were also applied to two case study regions of interest (accuracy listed in Table II), where we emulated a live inference setting by choosing data significantly temporally separated from the test dataset split. On these two case studies, KL reweighting produced higher accuracy for the Flipout and Reparameterization models (0.794 and 0.834; 0.802 and 0.832) compared to a KL term of zero (0.756 and 0.814; 0.778 and 0.817). Since the case studies were temporally separate from the training data, this higher accuracy indicates that the reweighted KL term helps produce models that generalize better than when the KL term is zero. Furthermore, for the case study datasets, the models with KL reweighting performed comparably to MC Dropout (0.784 and 0.824), a Bayesian model where the KL divergence term is not calculated.

B. Well-Calibrated Uncertainties

One of the primary reasons to use Bayesian models is to make use of the uncertainty measures that accompany a prediction. According to [12], a model has well-calibrated uncertainty if its performance improves as more high-uncertainty predictions are discarded. As suggested by the results in Table II, reweighting the KL term plays an important role in producing models with well-calibrated uncertainties that generalize to unseen data, because the accuracy on the Case 1 and Case 2 datasets is higher than when the KL term is set to zero. Well-calibrated uncertainties are useful for making decisions about predictions. The test set was served to each of the models for prediction. Next, the aleatoric and epistemic uncertainties of each test set prediction were calculated using Eq. 9. Table III contains the model type followed by the test set prediction uncertainty value at the 80th percentile of each type of uncertainty. There is relatively little change in the threshold value for the aleatoric uncertainty across all models (approximately 0.4), but the epistemic uncertainty thresholds are three orders of magnitude smaller for models with a KL term of zero (10^-6 and 10^-5), meaning these models do not capture as much of the epistemic uncertainty as when the KL term is reweighted (10^-3 and 10^-2). The epistemic uncertainty threshold values for the reweighted KL models (1.544e-03 and 1.033e-02) are comparable to the epistemic uncertainty threshold for MC Dropout (1.438e-02), a Bayesian model where the KL divergence term is not calculated. The values in Table III were used as thresholds to discard predictions made on the test set and the two case studies. The accuracy values reported in Tables IV-VI were calculated using only the predictions that had uncertainty values less than or equal to the values in Table III. For example, in Table IV, the Flipout model with a KL term of zero had an accuracy of 0.927 calculated using all test set predictions; an accuracy of 0.972 calculated using only predictions with epistemic uncertainty less than or equal to 4.210e-06; and an accuracy of 0.975 calculated using only predictions with aleatoric uncertainty less than or equal to 3.831e-01. After removing predictions with uncertainty values above the thresholds, the same accuracy trend appears between the test set and the two case studies as when the accuracy was calculated using all predictions: a KL term of zero led to higher accuracy on the test set, but this accuracy did not generalize as well to the case studies, while a reweighted KL term produced higher accuracy on the case studies. The reappearance of this trend reinforces the conclusion that the reweighted KL term helps produce models that generalize better than when the KL term is zero. With the exception of the epistemic uncertainty associated with Case 1 predictions made by the Reparameterization model with a KL term of zero, all other uncertainties are well calibrated, since model accuracy improved after removing high-uncertainty predictions. This demonstrates the additional utility that Bayesian models provide over deterministic models.
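The rejection procedure just described can be sketched as follows (ours; the thresholds, arrays, and error model are placeholders, not the paper's data):

import numpy as np

# Minimal sketch (ours) of uncertainty-based rejection: derive an 80th-
# percentile threshold from test-set uncertainties, then keep only the
# case-study predictions at or below it and recompute accuracy.

def filtered_accuracy(y_true, y_pred, uncert, threshold):
    keep = uncert <= threshold
    return (y_true[keep] == y_pred[keep]).mean(), keep.mean()

rng = np.random.default_rng(0)
test_epistemic = rng.gamma(2.0, 5e-4, size=10_000)    # placeholder values
threshold = np.percentile(test_epistemic, 80)         # as in Table III

y_true = rng.integers(0, 2, size=1_000)
y_pred = y_true.copy()
flip = rng.random(1_000) < 0.2                        # 20% misclassified
y_pred[flip] ^= 1
# Toy assumption: misclassified points carry extra epistemic uncertainty.
case_epistemic = rng.gamma(2.0, 5e-4, size=1_000) + flip * 1e-3

acc, frac_kept = filtered_accuracy(y_true, y_pred, case_epistemic, threshold)
print(f"accuracy on kept predictions: {acc:.3f} ({frac_kept:.0%} kept)")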
A deterministic ResNet cannot provide a measure of uncertainty about its predictions, let alone a measure of whether or not that uncertainty is well calibrated. Not only can Bayesian models provide a prediction with uncertainty metrics, it is also possible to determine if these uncertainties are well calibrated. Model predictions accompanied by well calibrated uncertainties allow a decision to be made about whether to keep the prediction, to discard it, or to pass it to another system or a human being, whichever is more useful for the task at hand.

C. Predictive Implications of Well-Calibrated Uncertainties

Having established that the uncertainties of the models are well calibrated, these uncertainties can now be used for the task at hand, precipitation type classification. Figure 2a depicts the variance (sum of aleatoric and epistemic uncertainty) associated with each prediction for Case 2. Figures 2b and 2c show the separated uncertainty types as defined by Eq. 9. Brighter colors represent higher levels of variance and uncertainty. Figures 2d and 2e display the predictions from the GPROF algorithm and the DPR-derived labels (used as the true labels for training). Figure 2f details the predictions of the Flipout KL Reweighting model on Case 2; lighter shades of blue and red represent classifications with epistemic uncertainty above the threshold of 1.544e-03 from Table III. When compared side-by-side, the aleatoric and epistemic uncertainty maps show exactly how much of the predictive uncertainty is caused by the dataset (aleatoric) and how much is caused by the model (epistemic). The scales of the uncertainty maps show that the majority of the uncertainty is a result of the aleatoric component; even the brightest portions of Fig. 2c are an order of magnitude smaller than the darker portions of Fig. 2b. When viewed in conjunction with the prediction map (Fig. 2f), the aleatoric and epistemic uncertainty maps in Figs. 2b and 2c provide information beyond what is available when viewing the prediction map in isolation. In a similar fashion to [16], where the spatial predictions (Fig. 2f) disagree with the DPR-derived labels (Fig. 2e), the epistemic uncertainty (Fig. 2c) identifies the incorrect classifications through the larger magnitude of these uncertainties compared to the rest of the epistemic uncertainty map. From 12° to 14° latitude and from -143° to -142.5° longitude, the Bayesian model over-predicts convective precipitation, but the epistemic uncertainty map identifies many of these predictions as high uncertainty (depicted in pink on the prediction map). The same type of prediction error occurs along 15° latitude, where the Bayesian model again over-predicts convective precipitation, and the epistemic uncertainty map identifies these predictions as high uncertainty (depicted in pink in Fig. 2c). If a downstream application requires high accuracy predictions, a decision could be made to discard these types of predictions, since they have both high epistemic and high aleatoric uncertainty. However, in some instances it may be beneficial to keep predictions with low epistemic uncertainty but higher aleatoric uncertainty (see Figs. 2b and 2c at 13° latitude, -141.5° longitude). For these predictions, the model has low (epistemic) uncertainty about its prediction despite noise that is inherent in the data (high aleatoric uncertainty). When making these types of decisions, the uncertainty-source component is of particular interest.
Compared to deterministic models, this new information about model predictions provides the ability to make informed decisions about how to handle high uncertainty predictions based on the level and the source of the uncertainty. Furthermore, such a decision cannot be made when the variance alone is considered.

D. Utility of Uncertainty Decomposition Beyond Prediction

The results of these experiments offer more than establishing whether or not model uncertainties are well calibrated and making decisions about predictions using well-calibrated uncertainties. Figure 3 shows the mean aleatoric uncertainty (Fig. 3a) and epistemic uncertainty (Fig. 3b) for the predictions made by each model on each dataset. All models have similar levels of aleatoric uncertainty across each dataset (∼0.19 for the test set, ∼0.22 for Case 1, ∼0.13 for Case 2). However, the models differ with respect to the amount of epistemic uncertainty for each dataset (∼0.0005-0.005 for the test set, ∼0.001-0.0085 for Case 1, ∼0.00075-0.0055 for Case 2). Across datasets, there is a general trend in epistemic uncertainty: MC Dropout has the highest epistemic uncertainty, Reparameterization with KL reweighting the second highest, and Flipout with KL reweighting the lowest. By decomposing the variance into aleatoric and epistemic uncertainties, these measures enable informed decision-making about model selection, data collection/processing, and targeted data analysis. Visualizing the uncertainty decomposition in this way can be useful when making decisions about model selection and data collection. Since all models in Fig. 3 represent the aleatoric uncertainty equally and have comparable accuracy, it is prudent to select the Flipout KL Reweighting model for deployment, because it has lower epistemic uncertainty values and a smaller range of epistemic uncertainty (∼0.0005-0.001) across the datasets compared to MC Dropout (∼0.005-0.0085) and Reparameterization with KL reweighting (∼0.003-0.006). From this visualization, it is also possible to gain insight into what type of data to collect. If it is possible to collect more data, the epistemic (model) uncertainty values in Figure 3 indicate that data similar to Case 1 (cyan) would be more beneficial than data similar to Case 2 (green), because the epistemic uncertainty values for Case 1 are higher and epistemic uncertainty can be reduced with more training data. This observation is strengthened when taking the dataset sizes into account: Case 2 epistemic values are close to the Test Set values, but the Test Set has close to 1000 times more samples, while Case 1 is only about 200 samples smaller than Case 2. New data similar to Case 1, or augmentation of the existing Case 1 data, would lower the epistemic uncertainty for each model, which could lead to models that generalize better in live inference than the current models. These types of conclusions cannot be drawn when using a deterministic model, since it provides no measure of uncertainty about its predictions. Additionally, these conclusions cannot be made without decomposing the variance into aleatoric and epistemic uncertainty, particularly when the epistemic uncertainty values are much smaller than the aleatoric values (see Table III and the scales of Figs. 3a and 3b). This is also true when analyzing Bayesian model performance with predictive entropy, a bulk uncertainty metric, as in Orescanin et al. [7]. This same type of analysis can also identify when targeted data analysis may be worthwhile.
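As an illustration of this kind of triage, here is a sketch (ours) that compares mean decomposed uncertainties across datasets to rank candidates for further data collection; the values are placeholders shaped like the reported magnitudes, not the paper's numbers.

import numpy as np

# Minimal sketch (ours): rank datasets for further data collection by
# mean epistemic uncertainty, as in the Fig. 3 discussion.

datasets = {
    "test":   {"aleatoric": 0.19, "epistemic": 0.0010},
    "case_1": {"aleatoric": 0.22, "epistemic": 0.0085},
    "case_2": {"aleatoric": 0.13, "epistemic": 0.0055},
}

# Epistemic uncertainty is reducible with more data; aleatoric is not.
ranked = sorted(datasets, key=lambda d: datasets[d]["epistemic"],
                reverse=True)
print("collect more data like:", ranked[0])   # -> case_1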
All considered models were less accurate when run on the case study datasets than when run on the much larger training set (see Table II). For Case 1, this difference in accuracy can be explained by the models not seeing enough data during training that is similar in underlying distribution to the data in Case 1, since the mean epistemic uncertainty values in Fig. 3 are higher for Case 1 than for the Test Set. The same holds true when running the MC Dropout model and the Flipout KL Reweighting model on the Case 2 dataset, for which the mean epistemic uncertainty values in Fig. 3 are also higher than for the Test Set. However, the Reparameterization KL Reweighting model has lower epistemic uncertainty for Case 2 than for the Test Set. This means the model is less uncertain about its predictions when run on Case 2 than when run on the Test Set, yet it gets the prediction wrong more often for Case 2 (accuracy 0.832) than for the Test Set (accuracy 0.864). Given this curious trend, the false positives and the false negatives for the Reparameterization KL Reweighting model can be aggregated and further analyzed. In this case, it is possible that the false positives and false negatives arise because the model is applied to the inner core of a tropical cyclone, where the precipitation is dynamically neither completely convective nor stratiform. Regardless of the reason, this is yet another type of observation that cannot be made when using deterministic models or without decomposing the uncertainty. By exploring the relationship between aleatoric and epistemic uncertainty in another way, still more insight into model selection is possible. In Fig. 4, each line indicates the expected epistemic uncertainty for a prediction given that the aleatoric uncertainty for the prediction is less than or equal to the value on the abscissa. This demonstrates that as aleatoric uncertainty (inherent in the data) increases, epistemic (model) uncertainty also increases. However, the magenta (Reparameterization KL = 0) and cyan (Flipout KL = 0, hidden by magenta) lines indicate that no change in epistemic uncertainty occurs as aleatoric uncertainty increases when the KL term equals zero. In other words, when the KL term is zero, the model fails to represent the epistemic uncertainty in a meaningful way. The lines for the other models have greater slopes, corresponding to an increasing effect of the noise inherent in the data on model uncertainty. This trend reinforces the choice of the Flipout KL Reweighting model (red lines) for deployment, since it has low epistemic uncertainty even as aleatoric uncertainty increases and has accuracy that generalizes to Cases 1 and 2. The models with KL equal to zero, on the other hand, have low epistemic uncertainty, but their accuracy does not generalize to Cases 1 and 2 (see Table II). Decomposing the variance into uncertainty types provides the opportunity for yet one more insight: identifying challenges due to data collection and processing.
[Figure caption fragment: predictions with epistemic uncertainty above the threshold (Table III) are shown in lighter shades. Note that the ranges of magnitudes for variance and aleatoric uncertainty are similar (0.0-0.5) and much larger than the values for epistemic uncertainty (0 to 0.01).]
When the observed trend in Fig. 4 is combined with the two orders of magnitude difference in uncertainty values seen in Fig. 3, it can reasonably be concluded in this example that reducing the aleatoric uncertainty would likely yield higher accuracy, since the models that have higher aleatoric uncertainty (MC Dropout and Reparameterization with KL reweighting) also have accuracy comparable to the Flipout with KL reweighting model. Having made this conclusion, attempts could now be made to reduce the aleatoric uncertainty, such as by re-calibrating collection sensors, augmenting the existing data with new features, or preprocessing the data differently to increase the signal-to-noise ratio. Without knowing that the aleatoric uncertainty far outweighs the epistemic uncertainty, it would be impossible to understand that reducing the noise inherent in the data provides more opportunity to improve model accuracy than providing the model with more training data. E. Virtual Concept Drift Detection Virtual concept drift occurs when the data distribution changes so much that model error is no longer acceptable [30]. Case 3 consists of one year of observations collected over land surfaces across the entire globe. We expect that the distribution of GMI brightness temperatures over land is completely different from that over ocean because the emissivity of land is significantly different from that of water. Table VII shows the accuracy of the models, which were trained on data collected over oceans, on this third case study dataset. Because this dataset was balanced, these accuracy results (∼ 50%) are akin to guessing the correct class. This dramatic decrease in accuracy may be an indicator that virtual concept drift has occurred. The decomposed variance can help confirm this intuition. Unlike the previous case studies and the test set, where the epistemic uncertainty was much smaller than the aleatoric uncertainty (∼ 0.4 difference), the aleatoric and epistemic uncertainties for Case 3 are much closer in magnitude (< 0.23 difference); the models with the KL term set to zero even have epistemic uncertainty that exceeds the aleatoric. These higher values of epistemic uncertainty suggest that the accuracy is suffering because the model did not see enough similar data in training. In Fig. 5a, the aleatoric values of Case 3 (yellow) are similar to the test set and the other case studies. This makes it unlikely that the decrease in accuracy is due to sensor degradation or some other source of noise inherent in the data. Furthermore, in Fig. 5b, the epistemic values are much higher for this dataset. These high values confirm that the Case 3 data is indeed not part of the same distribution as the model development datasets. This is to be expected given the contrast in brightness temperature (i.e., input feature) distributions originating over radiometrically cold ocean and warm land surfaces; however, during live inference, this difference would not be known ahead of time. Separating the uncertainties mathematically confirms that this is indeed virtual concept drift. This confirmation allows model developers to focus decisions on how to handle this new distribution, instead of blindly adding more data to development data splits or conducting time-consuming hyperparameter tuning (three weeks for these models). IV. SUMMARY AND CONCLUSION Machine learning techniques can efficiently extract discrete and continuous properties of physical systems. However, existing techniques have limited ability to provide insights into errors and the physical relationship between the observed and retrieved properties, a major downside of their application.
In the present study, we use the problem of detecting precipitation type from satellite observations to introduce a novel tool for decomposing errors of satellite-retrieved products and allow for better understanding of the links between observed and retrieved features. Typically, precipitation type is reported without quantitative uncertainty attached to an estimate. We use Bayesian models to classify precipitation type by mapping Global Precipitation Measurement mission Microwave Imager observations to Dual-frequency Precipitation Radar-derived precipitation type. These Bayesian models perform comparably to deterministic models, but with the added benefit of well-calibrated uncertainties. Well-calibrated uncertainties are useful for making decisions concerning high uncertainty predictions, model selection, targeted data analysis, and data collection and processing. Additionally, our Bayesian models enable mathematical detection of virtual concept drift, which occurs when the data distribution changes so much that model error is no longer acceptable [30]. From a pool of ∼ 14 million samples collected in 2017, we created a development dataset that we used to create traditional training/validation/test datasets with an equal representation of each type of precipitation. To simulate live inference, we used two temporally independent, single-overpass case study datasets (Case 1 and Case 2) from 2018. Case 1 is a subtropical marine mesoscale convective system (MCS) located near shallow convection. Case 2 is a section of Hurricane Lane observed southeast of Hawaii. To observe model behavior on data with a different distribution, we used a third case study dataset (Case 3) comprising one year of global observations collected over land in 2018. In our experiments, we developed Bayesian models using the evidence lower bound (ELBO in Eq. 6) as our loss function, which is dependent on the KL divergence term between the prior and posterior distributions. The KL divergence can grow rapidly and can prevent a model from converging during training. We adopted a KL reweighting scheme (Eq. 7) to control the KL divergence term during optimization. Our results (Table II) indicate that a reweighted KL term helps models achieve accuracy that generalizes better to live inference than setting the KL term to zero. The models with a reweighted KL term had accuracy comparable to the MC Dropout model, a Bayesian model where the KL term is not explicitly computed. All our Bayesian models also had well-calibrated uncertainties that proved useful for making decisions about high uncertainty predictions. This information can be used to decide to keep a prediction, to discard a prediction, or to pass a prediction to another system or a human being, whichever is more useful for the task at hand. By decomposing the uncertainty into aleatoric and epistemic components, decisions can be made about how to handle high uncertainty predictions. Figure 2 provides a visualization of different uncertainty metrics and high uncertainty predictions. By separating out the epistemic uncertainty, we identified the Flipout with KL reweighting model as the model most ready for deployment because it had comparable accuracy to all other models but with a lower and smaller range of epistemic uncertainty across the test set and the two case studies (Case 1 and Case 2).
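The summary refers to the ELBO (Eq. 6) and a KL reweighting scheme (Eq. 7) without reproducing them in this excerpt. As a sketch of one widely used scheme — the minibatch weighting of Blundell et al. (2015), which may or may not match the article's Eq. 7 — a per-batch negative-ELBO loss in TensorFlow Probability could look like the following; the Flipout layer choice and the function name are assumptions:

```python
import tensorflow as tf

def neg_elbo_loss(model, x, y, batch_index, num_batches):
    """Negative ELBO for 1-indexed minibatch i of M, with KL reweighting.

    Assumes `model` is built from TFP variational layers (e.g.
    tfp.layers.Convolution2DFlipout), which register their KL(q || prior)
    terms in model.losses. pi_i = 2^(M-i) / (2^M - 1), rewritten as
    0.5^i / (1 - 0.5^M) so it stays numerically stable for large M.
    """
    logits = model(x, training=True)
    nll = tf.reduce_mean(tf.keras.losses.sparse_categorical_crossentropy(
        y, logits, from_logits=True))
    kl = tf.add_n(model.losses)  # sum of layer-wise KL divergences
    pi = (0.5 ** batch_index) / (1.0 - 0.5 ** num_batches)
    return nll + pi * kl
```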
Using the same analysis of epistemic uncertainty described above, we also concluded that false positives and false negatives from the Reparameterization with KL reweighting model were candidates for targeted analysis. Furthermore, we were able to conclude that samples similar to Case 1 would be more beneficial for future training. Analyzing the aleatoric uncertainty of predictions made using data collected over the ocean showed that the aleatoric uncertainty far outweighed the epistemic uncertainty. Knowing this, attempts can be made to reduce the aleatoric uncertainty, such as by re-calibrating collection sensors, augmenting the existing data with new features, or preprocessing the data differently to increase the signal-to-noise ratio. The aleatoric uncertainty of predictions made using data collected over land surfaces was similar to that of the other datasets collected over the ocean. This similarity, coupled with much higher epistemic uncertainty than for the other datasets, provided a mathematical means to verify that virtual concept drift (a change in distribution) caused the accuracy to plummet. For precipitation type classification, Bayesian deep learning models perform comparably to their deterministic counterparts. Decomposing the uncertainty available from these Bayesian deep learning models allows users to make informed decisions concerning high uncertainty predictions, model selection, targeted data analysis, data collection/processing, and virtual concept drift. The ramifications of these capabilities for atmospheric science applications alone are potentially wide-ranging. For example, if Bayesian neural networks are applied to regression tasks (e.g., predicting microwave brightness temperature using infrared radiances), the included uncertainty may inform proper weighting of insufficiently certain predictions of synthetic values of commonly assimilated fields in global numerical models of the atmosphere. Features associated with large epistemic uncertainties highlight areas for which additional observations could be beneficial. For example, in this article, the model could improve by training on additional observations of deep convection in tropical cyclones. If predictions using the same model are applied to the same instrument over time (e.g., several years), increasing aleatoric uncertainty could be an early indicator that various issues (e.g., sensor malfunctions, orbital drift) are causing degradation to predictions. None of these are possible using traditional deterministic models.
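Finally, the virtual-concept-drift check described in Section E reduces to a simple heuristic: flag drift when mean epistemic uncertainty climbs to within roughly an order of magnitude of mean aleatoric uncertainty. A minimal sketch follows; the 0.1 ratio cutoff is an assumed value chosen to mirror the reported gap, not a threshold from the article:

```python
import numpy as np

def virtual_drift_flag(aleatoric, epistemic, ratio_cutoff=0.1):
    """Heuristic drift check: in-distribution data showed mean epistemic
    uncertainty one to two orders of magnitude below mean aleatoric
    uncertainty, while drifted data (Case 3) brought the two within the
    same order of magnitude. Inputs are per-sample arrays such as those
    produced by decompose_uncertainty() above."""
    ratio = float(np.mean(epistemic)) / max(float(np.mean(aleatoric)), 1e-12)
    return ratio >= ratio_cutoff, ratio

# Usage: drifted, r = virtual_drift_flag(ale, epi). A True flag suggests the
# incoming data is not drawn from the training distribution, so reported
# accuracy should not be trusted.
```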
Localized subglottic laryngeal amyloidosis: a case report Amyloidosis is a heterogeneous group of diseases characterized by accumulation of amyloid protein in different organs of the body. Localized affection of the larynx by amyloid deposits is a rare event. This is a case of localized laryngeal amyloidosis in a 24-year-old female patient who presented with hoarseness of voice. The amyloid mass was occupying the anterior part of the subglottic region and reaching up between both vocal folds. Complete excision was achieved with cold instruments. Localized laryngeal amyloidosis, despite being rare, should be kept in mind as one of the causes of mass lesions in the larynx. Introduction Amyloidosis is a group of disorders characterized by extracellular deposition of abnormal proteins in various organs of the body, which can eventually lead to their failure [1]. Virchow used the term amyloid to describe these abnormal proteins because of their starch-like reaction when treated with iodine and sulfuric acid [2]. Amyloidosis is a very rare systemic condition. In cases of localized amyloidosis of the head and neck region, the larynx is considered the commonest site of affection [3]. It usually affects age groups between 40 and 60 years, with a male-to-female predominance of 2:1.4 [4]. In this case report, we present a 24-year-old female patient with a complaint of hoarseness of voice that turned out to be caused by localized amyloidosis. Case report A 24-year-old female patient presented to our Otorhinolaryngology Unit with a gradual onset of hoarseness of voice of 3 months' duration, not associated with dyspnea or cough. She is not a smoker and has no history of alcohol consumption. She is not diabetic or hypertensive, and her medical history is unremarkable. There was no weight loss, according to the patient's own words. General and local external neck examination showed no abnormality. Flexible laryngoscopy showed a subglottic mass just inferior to the anterior commissure, occupying the whole length of the subglottis anteriorly, with superior extension that came between the two vocal cords on phonation. The mass was broad-based and pinkish, with a smooth surface and no ulceration (Fig. 1). Computed tomography with contrast was done (Fig. 2), which showed a small, nonenhancing soft-tissue lesion below the anterior commissure extending to the subglottis, with normal thyroid cartilage and thyroid gland and subcentimetric bilateral lymph nodes in the neck. A decision was taken to perform microlaryngosurgical biopsy of the mass for histopathological consultation. An informed consent was taken from the patient after explanation of the procedure. All the preoperative routine labs, such as complete blood count, kidney and liver function tests, serum electrolytes, and erythrocyte sedimentation rate, were normal. Under general anesthesia, microlaryngoscopy was done, which showed the same characteristics of the mass as seen by flexible laryngoscopy. The mass was biopsied first, with no considerable bleeding, and complete excision was then performed. The tissue taken was sent for histopathological examination, which showed homogeneous amorphous eosinophilic deposits in the subepithelial stroma (Fig. 3).
It was Congo red positive (Fig. 4) and showed apple-green birefringence under polarized light with Congo red stain, which is consistent with amyloid deposits, with no signs of malignancy (Fig. 5). Also, the lesion showed positive staining with periodic acid-Schiff and periodic acid-Schiff-D and was negative for mucicarmine. The patient withstood the operation well and the postoperative period was uneventful. She was discharged from the hospital on the next day with no respiratory difficulty and no hoarseness of voice. She was investigated later on for exclusion of systemic amyloidosis. Chest radiography, echocardiogram, erythrocyte sedimentation rate, renal and liver functions, and complete blood count were normal. Urine analysis and 24 h collection of urine for protein were also normal. Her Mantoux test and sputum acid-fast bacilli were negative. The patient was followed up for more than 18 months after surgery with no change of voice, no other clinical manifestations, and no other examination findings suggesting recurrence of the disease.
[Figure captions: Fig. 2 — Mass (subglottis below the region of the anterior commissure) as seen with contrast computed tomography. Fig. 3 — Eosinophilic deposits in the subepithelial stroma.]
Discussion Amyloidosis represents a variety of conditions characterized by extracellular deposition of abnormal, insoluble protein fibrils, which can eventually lead to failure of organs and systems [5]. It may take systemic or localized forms, yet the latter is very rare. Laryngeal amyloidosis remains a rare entity, accounting for about 1% of all benign laryngeal tumors [1]. The first case of laryngeal amyloidosis was reported by Burow and Neumann in 1875 [6]. Although the immunoglobulin nature of localized amyloid is generally accepted, its pathogenesis is still unclear [7]. Laryngeal amyloidosis can simulate many other benign and even malignant lesions in the larynx, such as laryngocele, schwannoma, carcinoma, rhabdomyosarcoma, hemangioma, and pemphigus; because of this, it should be included in the differential diagnosis of laryngeal masses despite its rarity. In the case presented in this study, the lesion was smooth and no ulcerations were noted. Also, it was firm as palpated by the instruments intraoperatively. The lesion was removed totally without endangering the vocal cords. A case of familial primary localized laryngeal amyloidosis in two sisters had been documented by Oguz et al. [8]; however, familial primary localized amyloidosis of the larynx is a very rare finding. The case in this study showed no family history of such a condition. The most common presentation of laryngeal amyloidosis is a change in voice [9], as was seen in our case. Amyloid protein is detected histologically by staining the samples with Congo red stain, which gives an apple-green appearance to the amyloid material; in polarized light, amyloid appears birefringent. It has been documented that there is no need for extensive investigations to rule out systemic amyloidosis as long as no systemic manifestations are found [10]. Cases of laryngeal amyloidosis need long-term follow-up owing to the slowly growing nature of the lesion. The case presented in this report showed no recurrence over a period of 18 months.
Fabrication and Mechanical Properties of Glass Fiber/Talc/CaCO3 Filled Recycled PP Composites 1Department of Metallurgy and Materials, Sakarya University, Faculty of Technology, Esentepe Campus, 54187 Sakarya, Turkey 2Department of Manufacturing Engineering, Faculty of Simav Technology, Dumlupınar University, Turkey 3Polipro Plastik San. ve Dış Tic. A.Ş, R&D director, Kocaeli/Turkey, Turkey 4Department of Industrial Design Engineering, Faculty of Simav Technology, Dumlupınar University, Turkey Introduction Currently, environmental awareness has led to renewed interest in conventional materials, and issues like recyclability and ecological care are becoming increasingly significant for the introduction of novel materials and products (Arbelaiz et al., 2005). Waste is seen as a major issue, particularly for high-consumption polymers such as polypropylene (Brachet et al., 2008). Plastic recycling has been effectively applied to the main processed materials, resulting in economic benefits (Da Costa et al., 2007). In the current study, recycled PP was used as a matrix for preparing the composites. Recycled PP is obtainable as a waste material and generates many ecological harms. To decrease these ecological harms, PP can be reprocessed to fabricate new value-added products at low production cost (AlMaadeed et al., 2012). Due to its good machinability, high recyclability, and low price, PP has found a wide range of applications in the textile, packaging, automotive, and furniture industries. To enlarge the variety of applications, increasing the strength and modulus of PP has attracted significant interest. Filling PP with hard inert particles is an effective, inexpensive, and convenient method to increase its strength and stiffness (Zheng et al., 2009). Generally, the mechanical properties of recycled mixed polymers are evidently improved by adding fibres that have much greater strength and rigidity than those of the matrices (Homkhiew et al., 2013). In this study we used glass fibre, calcite, and talc as reinforcements for making the composites. Several authors have investigated the recycling of polypropylene (Beg and Pickering, 2008). Dintcheva et al. (2001) studied the influence of three dissimilar fillers, namely glass fibres, calcium carbonate, and wood fibres, on recycled polyolefin plastic and reported that glass fibre outperforms the others, increasing the tensile modulus from 460 to 935 MPa with 20% glass fibre. Nevertheless, a drop in elongation at break was detected. A comparable investigation was conducted by Putra et al. (2009) on the influence of glass fibre, talc, wollastonite, and gypsum on the mechanical properties of recycled-plastic composites. Both fillers and reinforcements showed growth in the toughness of the materials. Their investigation showed that glass fibres demonstrate the best outcomes, improving the tensile and flexural strength by 30%, with an important growth in modulus of 250%, with the addition of 30% glass fibre. The addition of glass fibre from 10-30% by weight results in considerable growth in elastic modulus, along with a rise in strength with reduced ductility (Xanthos et al., 1995). According to Bank's article (Lawrence, 2006), with the addition of 30% glass fibres the elongation at break can decrease from 25 to 2%. Another study, conducted by Gupta et al. (1989), focused on the influence of high temperature on the tensile properties of unreinforced and reinforced PP.
They observed brittle behavior at lower temperatures, whereas ductile behavior was displayed at higher temperatures. The results of the study further indicate a fall of in excess of 50% in tensile strength over the temperature range of 20 to 55°C for both reinforced and unreinforced specimens. Conversely, for the glass fiber reinforcements, the decrease in both tensile strength and modulus at raised temperature is below that of the unreinforced specimens. The researchers further reported that, with the increment of test temperature, the interfacial shear strength decreased for all composite specimens, which could arise either from a weakening of the chemical bond or from the reduction of thermal stresses and the decrease of bond strength at the fibre-matrix interface. The mechanical properties of recycled mixed plastic waste with glass fiber display better development, as observed by Xanthos and Narh (1998). Several authors (AlMaadeed et al., 2012; Hugo et al., 2011; Putra et al., 2009; Dintcheva et al., 2001) have stated that the melt flow index decreases with the increment of the fiber content owing to the great weight and the increment of the viscosity of the composite. A comparable situation was described by Hugo et al. (2011), who stated that, because of the high price and processing restrictions of the recycled composite, the maximum quantity of glass fiber that can be incorporated into the product is 30% by weight. In this study, recycled polypropylene granules (G-PP) were obtained from companies, and glass fiber/talc/CaCO3 filled G-PP granules in different ratios were produced with a compounding extrusion process. These new composite pellets were then processed in an injection molding machine, by means of a designed mold, to produce specimens for tensile and impact tests. Materials Recycled Polypropylene Granules (RPP) obtained from the Polipro Company were used as the polymer matrix material, having a density of 0.92 g/cm3. Talc and CaCO3 additive materials were obtained from the Omya Company. Commercially available glass fiber with 4.55 µm-1 mm average length and 2.9 g/cm3 average density was used as reinforcement for making the composites. Composite Preparation Talc, CaCO3, and glass fiber filled recycled PP polymer composites were produced using a Brabender twin-screw extruder. The extruder temperature was in the range of 190-230°C for the filled RPP. RPP samples were prepared with 30% talc, CaCO3, or glass fiber. The polymer mixtures were extruded, cooled, and then granulated. Test samples were prepared with an Arburg injection molding machine. Morphological Analysis Morphological analysis was carried out with a Scanning Electron Microscope (Nano SEM 650). The cross-sections of the samples were examined after the tensile tests to investigate the fracture and the bonding between fillers and matrix. Tensile Properties The tensile tests were performed in accordance with the ASTM D638-02 standard, using a Universal Testing Machine at a crosshead velocity of 10 mm/min. The standard dimensions are 20 mm length, 12.5 mm width, and 4 mm thickness. Young's modulus was determined automatically by the Lloyd-LC Instruments software using the tangent technique. The tensile test direction was uniaxial; in each case, five samples were tested and the mean value was reported. Impact Properties Notched Charpy impact tests were carried out with an impact tester with a pendulum energy of 15 J, according to ASTM D256.
All the reported values for the impact tests were the average values of 5 samples. Thermal Testing Differential Scanning Calorimetry (DSC) The RPP and RPP composites were analyzed by DSC using a thermal analyzer (Setaram-Labsys evo). The measurements were taken at a fixed heating rate of 10°C/min under a nitrogen atmosphere. The crystallinity fraction (X_c, %) of the RPP and RPP composites was determined by means of the following equation:
\[ X_c(\%) = \frac{\Delta H_m}{W_{\mathrm{polymer}}\,\Delta H_m^{0}} \times 100 \]
where W_polymer is the weight ratio of the polymer matrix, ΔH_m is the heat of fusion, and ΔH_m^0 is the heat of fusion of 100% crystalline PP (207.1 J/g) (Velasco et al., 2002; Wal et al., 1998). Thermo Gravimetric Analysis (TGA) Thermogravimetric analysis (TGA) measurements were performed in a nitrogen atmosphere using a Setaram-Labsys evo instrument at a heating rate of 10°C/min. Melt Flow Index Melt Flow Index (MFI) is defined as the weight of polymer in grams extruded in 10 min through a capillary of specific dimensions under pressure applied by a dead weight via a piston. Melt flow index testing was conducted using a Ceast model machine. ASTM D1238-Method A was applied for determining the MFI of the recycled polymers. Morphological Study The morphologies of the talc, calcium carbonate, and glass fiber reinforced polymer composites were examined by Scanning Electron Microscopy (SEM) on the fracture surfaces of the tensile specimens. The samples were coated with a thin gold film and analyzed at an accelerating voltage of 1.00 kV. TGA Analysis Results Thermal decomposition curves (thermograms) for the polypropylene (PP) polymer and the talc, calcium carbonate, and glass fiber reinforced polymer composite samples are given in Fig. 1. While the PP polymer begins to degrade at approximately 380°C, the talc, CaCO3, and glass fiber reinforced PP polymer composites begin to decompose at approximately 400°C. Depending on the type of additive, the decomposition temperature increased on average by 5.2%. Similar results were obtained for glass fiber and wood flour reinforced hybrid composites from recycled polypropylene (AlMaadeed et al., 2012). In this case, the additives added to the PP polymer improve its thermal properties, demonstrating the possibility of use at higher temperatures. From the analysis of the results, mass losses in the PP polymer composite samples were found to be around 70%. DSC Analysis Results In Table 1, the DSC analysis results are given for the samples of RPP polymer, RPP+30% talc, RPP+30% CaCO3, and RPP+30% GF polymer composites. As seen from the table, the melting temperature (Tm), enthalpy (ΔHm), and crystallization degree (Xc, %) of the RPP polymer and the filled RPP composites were obtained. There is no difference in the Tm of the RPP polymer composites. As seen from the table, the crystallization degree of the talc, CaCO3, and glass fiber reinforced RPP polymer composites was lower than that of the unfilled RPP polymer. While the crystallization degree of the RPP polymer was 36.87%, it was 36.35, 30.83, and 29.74% with the addition of talc, CaCO3, and glass fiber, respectively. The fillers prevent the movement of the PP macromolecular chains and hinder the chain segments from taking up ordered positions in the crystals. The fillers thus hinder the development of crystallinity. Consequently, the Xc of the polymer composites is reduced (AlMaadeed et al., 2012).
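As a worked illustration of the crystallinity equation above, the reported X_c values can be inverted to back-calculate the implied heats of fusion. The ΔH_m values below are derived for illustration only; they are not measurements quoted from the paper:

```latex
% Crystallinity from DSC, and its inversion:
\[
  X_c(\%) = \frac{\Delta H_m}{W_{\mathrm{polymer}}\,\Delta H_m^{0}} \times 100
  \qquad\Longleftrightarrow\qquad
  \Delta H_m = \frac{X_c(\%)}{100}\,W_{\mathrm{polymer}}\,\Delta H_m^{0}
\]
% Neat RPP: W_polymer = 1.00, X_c = 36.87%
%   => Delta H_m ~ 0.3687 x 1.00 x 207.1 J/g ~ 76.4 J/g
% RPP + 30% GF: W_polymer = 0.70, X_c = 29.74%
%   => Delta H_m ~ 0.2974 x 0.70 x 207.1 J/g ~ 43.1 J/g
```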
Mechanical Tests The tensile strength results for the Recycled Polypropylene (RPP) polymer, 30% talc filled RPP, 30% calcium carbonate filled RPP, and 30% glass fiber reinforced RPP composite samples are given in Fig. 2. While the tensile strength of the RPP polymer was 30 MPa, it decreased with the addition of talc and CaCO3 to 23 and 20.6 MPa, respectively. The tensile strength of the RPP polymer thus decreased by 23% and 31% for talc and CaCO3, respectively. This behavior is generally caused by the reduced interaction at the interface between the fillers and the polymer matrix (Mutje et al., 2006). With the addition of 30% glass fiber into RPP, the tensile strength reached 68 MPa, an increase of 126%. Similar results were also obtained by AlMaadeed et al. (2012), Beg and Pickering (2008), Zheng et al. (2009), and Serrano et al. (2014). In Fig. 3, the yield strength results are given for the RPP polymer and the RPP+30% talc, RPP+30% CaCO3, and RPP+30% GF polymer composite samples. Similar to the tensile strength results, the yield strength of the RPP polymer decreased by 15.3 and 26.9% with the addition of talc and CaCO3, respectively. This is because the talc and CaCO3 additives separate easily from the matrix due to the creation of a weak interfacial bond. The yield strength of the GF reinforced composite is higher because the glass fiber establishes a bond with the PP matrix at the same loading. The yield strength of the RPP+30% GF polymer composite increased by 142% compared to the RPP polymer. Similar results were also obtained by Li et al. (2012) in earlier work. In Fig. 4, the results for modulus of elasticity are given for the samples of RPP polymer and RPP+30% talc, RPP+30% CaCO3, and RPP+30% GF polymer composites. The modulus of elasticity increased with the addition of talc, CaCO3, and glass fiber into the RPP polymer. The 1 GPa elastic modulus of the RPP polymer increased by 42 and 52%, reaching values of 1.5 GPa and 1.6 GPa, with the addition of talc and CaCO3, respectively. With the addition of GF, the elastic modulus of the RPP polymer increased by 300%, reaching 4.2 GPa. The present results are consistent with previous studies (AlMaadeed et al., 2012; Beg and Pickering, 2008; Zheng et al., 2009; Serrano et al., 2014; Li et al., 2012). In Fig. 5, the Izod impact strength results are given for the samples of RPP polymer and RPP+30% talc, RPP+30% CaCO3, and RPP+30% GF polymer composites. The impact strength of the RPP polymer decreased with the addition of talc and CaCO3; however, with GF it increased. The impact strength of the RPP polymer (4.1 kJ/m²) was reduced to 3.2 and 3.5 kJ/m², respectively, when talc and CaCO3 additives were used, but reached 8.8 kJ/m² with the addition of the GF additive. Similar results were attained previously by Beg and Pickering (2008). Melt Flow Index (MFI) results are shown in Fig. 6. The MFI values decreased with talc, CaCO3, and glass fiber. The viscosity of the RPP composites increased as a result of the high content of the additives. The same results were observed by several researchers (Beg and Pickering, 2008; Lu et al., 2006). Tasdemir et al. (2009) studied LDPE and PP-wood fiber composites and stated that the MFI decreased with increasing wood flour content. Son et al. (2004) reported that the MFI decreased with increasing paper sludge content. Microstructural Characterization In Fig.
7, the SEM micrographs for RPP+30% talc, RPP+30% CaCO3, and RPP+30% GF, obtained from the fracture surfaces of the tensile specimens, are shown. As a result of the quite weak bonding between RPP and the additives, the talc and CaCO3 particles were pulled out of the recycled RPP matrix, owing to the high amount of talc and CaCO3 additives. However, the mechanical properties improved for the RPP+30% GF composites due to both the high amount of glass fibers and the better interfacial bonding between the glass fiber and the RPP matrix. It is seen from the SEM micrographs that fewer glass fibers are pulled out; instead, they are broken. This situation is evidence of a strong interfacial bond between the glass fiber and the RPP matrix. Conclusion In the present research, the following outcomes were obtained: • The tensile strength of the RPP polymer was reduced by 23 and 31% for talc and CaCO3, respectively. Also, with the addition of 30% glass fiber into RPP, the tensile strength reached 68 MPa, an increase of 126% • The yield strength of the RPP polymer decreased by 15.3 and 26.9% with the addition of talc and CaCO3, respectively. In contrast, the yield strength of the GF reinforced composite was higher because the glass fiber established a bond with the PP matrix at the same loading. The yield strength of the RPP+30% GF polymer composite increased by 142% compared to the RPP polymer • The modulus of elasticity increased with the addition of talc, CaCO3, and glass fiber into the RPP polymer. The 1 GPa elastic modulus of the RPP polymer increased by 42 and 52%, reaching values of 1.5 and 1.6 GPa, with the addition of talc and CaCO3, respectively. With the addition of GF, the elastic modulus of the RPP polymer increased by 300%, reaching 4.2 GPa • The impact strength of the RPP polymer decreased with the addition of talc and CaCO3; however, with GF it increased. The impact strength of the RPP polymer (4.1 kJ/m²) was reduced to 3.2 and 3.5 kJ/m², respectively, when talc and CaCO3 additives were used, but reached 8.8 kJ/m² with the addition of the GF additive • The MFI values decreased with talc, CaCO3, and glass fiber. The viscosity of the RPP composites increased as a result of the high content of the additives • As a result of the quite weak bonding between RPP and the additives, the talc and CaCO3 particles were pulled out of the recycled RPP matrix, owing to their high content. However, the mechanical properties improved for the RPP+30% GF composites due to both the high amount of glass fibers and the better interfacial bonding between the glass fiber and the RPP matrix. A strong interfacial bond was observed between the glass fiber and the RPP matrix Authors' Contributions Uğur Soy: Literature search, checked tables and figures, and interpreted the data. Fehim Fındık: SEM analysis, checked the grammatical errors, writing and revising of the manuscript. Salih Hakan Yetgin: Mechanical and thermal tests, writing and revising of the manuscript, corresponding author. Tolga Gökkurt: Supply of material, manufacturing of composites with extrusion and injection. Ferhat Yıldırım: Manufacturing of composites with extrusion and injection, DSC and TGA analysis. Ethics The present work has not been published in its present form in any journal and will not be published in any other journal.
Proinflammatory and Prothrombotic Effects of Hypoglycemia Hypoglycemia is known to be intrinsic to the treatment of diabetes because insulin is a powerful glucose-lowering agent and sulfonylureas exert their effect through insulin release by the pancreatic β-cells. Hypoglycemia occurs in association with these two common modes of therapy and was previously accepted as a part of the treatment of this condition. With the arrival of other modes of diabetes treatment, such as metformin, thiazolidinediones, α-glucosidase inhibitors, and incretins, which do not induce hypoglycemia except when administered in combination with insulin and sulfonylureas, the issue of hypoglycemia has to be assessed in the context of both the immediate risk related to neuroglycopenia and the possible long-term risk of diabetic vascular complications. Vascular complications of hypoglycemia have to be tackled with greater urgency now because two recent trials of intensified diabetes treatment with insulin, the Action to Control Cardiovascular Risk in Diabetes (ACCORD) trial and the Veterans Affairs Diabetes Trial (VADT), did not demonstrate a reduction in cardiovascular events (1,2). In fact, the intensified insulin treatment arm of the ACCORD trial had to be halted because of an increase in overall mortality, despite a reduction in acute myocardial infarction. The rate of hypoglycemia in both trials was significantly increased with intensified insulin treatment. Although the analysis of the ACCORD data did not support the hypothesis that the increased mortality in the study was a result of hypoglycemia, the fact that hypoglycemia may often be asymptomatic leaves us with the possibility that it may be responsible.
The fact that hypoglycemia results in platelet hyperaggregability (3) and an increase in several factors involved in the coagulation cascade has been known for over 2 decades. Activated partial thromboplastin time is shortened, fibrinogen and factor VIII increase, and platelet counts fall in association with hypoglycemia (4). More recently, two studies have shown that hypoglycemia induces proinflammatory changes, including an increase in the plasma concentration of interleukin (IL)-6 (5) and increases in other proinflammatory mediators, including leucocytosis, reactive oxygen species (ROS) generation, lipid peroxidation, and levels of tumor necrosis factor-α (TNFα), IL-1β, and IL-8 (5). Two studies published in this issue of Diabetes Care confirm that hypoglycemia does, indeed, induce an increase in proinflammatory mediators and platelet activation, and has an inhibitory effect on fibrinolytic mechanisms. Wright et al. (6) and Gogitidze Joy et al. (7) both used an insulin infusion to gradually induce hypoglycemia and then clamped glucose at hypoglycemic levels of 2.5 and 2.9 mmol/l, respectively. The former maintained hypoglycemia for 60 min while the latter maintained it for 120 min. As is evident from the data, the effects of the longer duration of hypoglycemia in the study by Gogitidze Joy et al. are more impressive, as reflected in the increase in proinflammatory mediators, in spite of the fact that glucose concentrations were not as low as those in the study by Wright et al. The increases in the indexes of inflammation and oxidative stress in the study by Razavi Nematollahi et al. (5) were even more impressive, probably because the mode of induction of hypoglycemia was a bolus intravenous injection, which led to a rapid fall in blood glucose concentrations, leading to a rapid release of catecholamines and the stimulation of the inflammatory response. In the study by Wright et al., hypoglycemia induced an increase in CD40 expression on mononuclear cells and in the plasma concentration of CD40L, as well as an increase in platelet-monocyte aggregates and P-selectin concentrations, with a trend toward an increase in von Willebrand factor concentrations. In the study by Gogitidze Joy et al., hypoglycemia led to an increase in intercellular adhesion molecule (ICAM), vascular cell adhesion molecule (VCAM), P-selectin, and E-selectin, as well as plasminogen activator inhibitor-1 (PAI-1), TNFα, IL-6, and vascular endothelial growth factor (VEGF). Both of these studies included control arms in which the effect of insulin infusions administered at the same rates as above was investigated while maintaining glucose concentrations in the normal range through the appropriate titration of glucose infusion rates. Both studies confirmed the presence of an anti-inflammatory effect of insulin during infusions when euglycemia was maintained (8). Again, the anti-inflammatory effects of insulin were more impressive in the study by Gogitidze Joy et al. because they maintained the infusion of insulin for 120 min, whereas the study by Wright et al. infused insulin for only 60 min. Previous work has consistently shown impressive anti-inflammatory effects of insulin infused for 120 min or more (8). Thus, in situations where insulin infusions are used for the anti-inflammatory and cardioprotective actions of insulin, extreme care has to be exercised because hypoglycemia reverses the effects of euglycemic hyperinsulinemia.
It is of interest that hypoglycemia exerts proinflammatory effects similar to those of hyperglycemia and glucose intake (9,10). Clearly, hypoglycemia results in the induction of rapid inflammatory, platelet proaggregatory, antifibrinolytic, and prothrombotic responses. This effect of hypoglycemia overrides the anti-inflammatory, antiplatelet, and profibrinolytic effects of insulin observed under euglycemic conditions. In addition, there is also an increase in ROS generation and lipid peroxidation, reflecting oxidative stress. Although the hypoglycemic episodes are transient, repeated occurrences of such episodes may have cumulative effects that are detrimental to inflammation-based processes such as atherogenesis and its thrombotic complications. These detrimental effects would add to the previously demonstrated relationship of both silent and symptomatic hypoglycemia to cardiac angina. In one study involving diabetic patients with coronary heart disease who were continuously monitored for blood glucose concentrations and electrocardiographic changes, it was demonstrated that there was chest pain associated with hypoglycemia in 20% of the patients, of whom 40% had concomitant electrocardiogram (ECG) changes consistent with ischemia (11). Asymptomatic hypoglycemia was also associated with ECG changes of ischemia in 14%. In this study, it was also observed that a rapid fall in glucose of >100 mg/dl per hour was more likely to be associated with chest pain and ECG changes of ischemia. This likely vasoconstrictor effect of hypoglycemia on the coronary circulation is also probably attributable to catecholamines. This article does not comment on the occurrence of dysrhythmia; however, the sudden release of catecholamines is also conducive to the induction of abnormal cardiac rhythms. Collectively, therefore, hypoglycemia can trigger a sequence of events that may be extremely detrimental from the cardiac point of view. It is important to note that the rapidity of the onset of hypoglycemia is also a major determinant of the proinflammatory changes. Further studies are necessary to increase our understanding of the pathophysiology of hypoglycemia and its relationship with inflammation, platelet aggregation, and thrombotic mechanisms. In addition, we also need further information on the effects of the severity, duration, and rapidity of hypoglycemia development. Such studies are even more important and urgent because insulin is now being used in the intensive care setting in very ill patients with profound and severe inflammation in association with the activation of thrombotic mechanisms. To add to these challenges, there are also the contrasting effects of spontaneous hypoglycemia, which carries a bad prognosis in patients with acute myocardial infarction, whereas hypoglycemia induced by insulin does not (12,13). Another important aspect of the pathophysiology of hypoglycemia is its effect on the brain. In addition to the obvious short- and long-term effects of neuroglycopenia, the brain is also vulnerable to systemic inflammation because cytokines from the circulation can potentially enter the brain, activate the microglia (the representatives of macrophages in the central nervous system), and activate a damaging, neurotoxic sequence of events (14). Whether hypoglycemia-induced catecholamine release also exerts a proinflammatory effect on microglia requires further investigation (15).
It should be noted that repeated hypoglycemia and hypoglycemia-related deaths are associated with significant changes in the hippocampus and the dentate gyrus (16,17).
Who’s a Good Boy? Effects of Dog and Owner Body Weight on Veterinarian Perceptions and Treatment Recommendations Background: Weight bias against persons with obesity impairs health care delivery and utilization and contributes to poorer health outcomes. Despite rising rates of pet obesity (including among dogs), the potential for weight bias in veterinary settings has not been examined. Subjects/Methods: In two online, 2×2 experimental studies, the effects of dog and owner body weight on perceptions and treatment recommendations were investigated in 205 practicing veterinarians (Study 1) and 103 veterinary students (Study 2). In both studies, participants were randomly assigned to view one of four vignettes of a dog and owners with varying weight statuses (lean vs. obesity). Dependent measures included emotion/liking ratings toward the dog and owners; perceived causes of the dog’s weight; and treatment recommendations and compliance expectations. Other clinical practices, such as terms to describe excess weight in dogs, were also assessed. Results: Veterinarians and students both reported feeling more blame, frustration, and disgust toward dogs with obesity and their owners than toward lean dogs and their owners (p values<0.001). Interactions between dog and owner body weight emerged for perceived causes of obesity, such that owners with obesity were perceived as causing the dog with obesity’s weight, while lean owners were perceived as causing the lean dog’s weight. Participants were pessimistic about treatment compliance from owners of the dog with obesity, and weight loss treatment was recommended for the dog with obesity when presenting with a medical condition ambiguous in its relationship to weight. Veterinarians and students also reported use of stigmatizing terms to describe excess weight in dogs. Conclusions: Findings from this investigation, with replication, have implications for training and practice guidelines in veterinary medicine. Introduction Increases in obesity and its associated health comorbidities have necessitated enhanced training for health professionals to meet patients' weight management needs (1). Prior studies have demonstrated that healthcare practitioners report negative attitudes toward patients with obesity and endorse factors such as willpower and "personal responsibility" as causes of obesity more strongly than they endorse biological or environmental factors (2). Obesity may also affect the number of tests ordered by physicians, and physicians attribute health problems to patients' weight even when they are ostensibly unrelated (3,4). In addition, patients who perceive being judged by their health professionals report avoiding preventive and follow-up care services (4). To address weight bias in health care, and consequently enhance treatment utilization and patient-centered care, several professional societies have undertaken initiatives to promote awareness of weight bias among practitioners and to prevent the stigmatization of patients with obesity (1,5,6). For example, increased attention to the terms used to describe patients' weight and the use of "peoplefirst" language have been promoted to reduce weight stigma (7,8). Calls for more education about the etiology and treatment of obesity have also come from multiple health professions (9). Surprisingly, the topic of weight bias has not been investigated among veterinarians. Estimates suggest that overweight/obesity affects over half of pets, including up to 60% of dogs (10)(11)(12). 
Similar to humans, dogs with obesity are at heightened risk for metabolic and osteoarthritic diseases (10). Dogs with obesity may also be subject to weight stigmatization. In addition, their owners may experience "courtesy stigma," in which they are viewed negatively and blamed for their dog's weight, as has been observed in attitudes toward parents of children with obesity (13,14). Given prior evidence that perceived weight stigma among humans in medical settings interferes with patient care (4), investigating the potential presence of weight bias in veterinary interactions may have important implications for pet health care. The present research represents a first effort to explore potential weight bias in veterinarians. Specifically, the current two studies investigated how dog and owner body weight may impact veterinarians' emotional responses to dogs and owners, perceived causes of and blame for dogs' weight, and treatment recommendations and compliance expectations. Veterinarians were predicted to respond more positively to lean dogs and owners than to those with obesity, with particularly negative responses when both the owners and dog had obesity. Exploratory analyses assessed other aspects of clinical practice, such as the terms used by veterinarians to describe dogs with obesity. These aims were tested in an experimental, online study of practicing veterinarians and in an additional replication study of veterinary students. Study 1 Methods Participants.-Alumni from a single veterinary school were recruited by email to participate in a 10-minute voluntary, anonymous online survey. The recruitment target was 120 participants (n=30 per condition, described below), based on anticipated medium to large effect sizes (15)(16)(17) Emails were sent over the span of a few weeks. Of the 290 alumni who entered the survey, 205 veterinarians completed at least the first block of items and were included in the analyses. Table 1 presents participants' demographic characteristics. Procedures.-The first page of the survey was the informed consent form, and participants had to click a box to provide consent before proceeding to the survey. Participants were then asked to indicate if they were a practicing veterinarian. Eligible participants were randomized via Qualtrics software to view one of four potential images featuring: a lean dog and lean owners (one male and one female owner); a lean dog and owners with obesity; a dog with obesity and lean owners; or a dog and owners with obesity. Owner images featured cartoon men and women with no faces and generic clothing. These images were identical except for weight status. The dog images were from a veterinary guide for assessing weight status and pictured the dog from the side and above. Dog and owner images were combined using Photoshop CC, with the owners standing behind the dog. Participants completed all measures and received a debriefing statement upon completion. No compensation was given for participation. This study was granted exemption by the institutional review board and was preregistered in the Open Science Framework. Measures.-As a manipulation check, participants rated the weight status of the dog and owners (1=very underweight to 7=very overweight) (18). Positive regard toward dogs and owners was assessed by asking participants to rate (1-7) the extent to which they felt the following emotions in response to the dog and owners, respectively: affection; blame; compassion; frustration; disgust; respect; and contempt. 
These emotion ratings have been used in prior studies of weight bias (and general bias) in humans (19,20). Participants also rated how much they liked the dog and owners (1-7). To assess perceived causes of the dog's body weight, an 18-item scale (items rated 1-7) was adapted, in consultation with a veterinarian, from prior weight bias scales used to measure the extent to which weight in humans is attributed to biology/genetics, behaviors, personal responsibility (e.g., motivation), and the environment (21). Participants also rated (1-7) the extent to which they believed, overall, the owners' weight affected the dog's weight and vice versa. Participants were asked to endorse (yes/no) up to 7 treatment recommendations related to the dog's weight (weight loss, reduce portion sizes, reduce treats, increase physical activity, medication for weight, follow-up visit in 2-3 weeks, or no weight loss recommendations). Treatment options also included a recommendation for a prescribed maximum number of treats per day (scored continuously). Participants rated how likely they thought owners were to comply with their treatment recommendations and how much they would want to continue to treat the dog (ratings 1-7). Participants were then informed, in a hypothetical vignette, that the dog they viewed was presenting at their clinic with respiratory problems. Participants were asked to endorse (yes/no) four potential diagnostic procedures (physical exam, scoping, chest film, or no diagnostic action). Participants were then told that the dog was diagnosed with a collapsed trachea and asked to endorse five potential treatment options (weight loss, stents, medication, surgery, or no treatment). Participants were also provided a list of potential terms to label a dog with excess weight and asked to indicate whether they had ever used any of the terms with clients and/or colleagues (and could write in additional terms used). Finally, participants were asked whether (and how frequently) they had recommended that owners seek weight loss counseling for themselves and whether or not they would recommend obesity to be labeled as a disease in dogs. Demographic characteristics included participant age, race/ethnicity, sex and gender, selfreported height, weight, and weight status (rated 1 [very underweight] to 7 [very overweight]), and how long ago participants graduated from veterinary school. Analytic plan.-Factor analysis with varimax rotation and eigenvalue cutoff of 1 was conducted to group causes of dog's weight into causal categories with item loadings>0.5. Item ratings within each grouping were averaged to create causal category scores. Data were checked to meet assumptions of normality, and dependent variables were transformed as needed. Analyses of variance (ANOVAs) were used to test the main and interacting effects of dog and owner body weight on emotions/liking toward the dog and owners and perceived causes of the dog's weight. Logistic regression was used to identify effects of dog and owner body weight on treatment recommendations. Family-wise Bonferroni-type corrections were used to conservatively account for the large number of comparisons by dividing 0.05 by the number of comparisons per family of outcome measure (emotions/liking toward dogs, emotions/liking toward owners, perceived causes of dog's weight, and treatment recommendations/expectations). 
Results

Manipulation check.-Participants rated the dog with obesity as having a significantly higher weight status than the lean dog (6.4 vs. 4.1 on the 1-7 scale, p<0.001). Similarly, owners with obesity were rated as having a significantly higher weight status than lean owners (6.1 vs. 4.0, p<0.001). No significant interaction of dog and owners' weight was found.

Emotions and liking.-To account for the total comparisons made across emotion/liking ratings of dogs and of owners, respectively, p≤0.002 was used as the cutoff for significance for each family of comparisons. Figures 1a and 1b present mean emotion and liking ratings toward dogs and owners. Comparison statistics can be found in Supplementary Table S1. Main effects of dog (but not owner) weight emerged for all emotion ratings toward dogs, with the exception of affection (p=0.01) and respect (p=0.42). The dog with obesity, compared to the lean dog, elicited stronger feelings of blame, frustration, disgust, and contempt toward the dog, as well as greater ratings of compassion (all p values≤0.001; Figure 1a). Main effects of owners' weight and interaction terms of dog and owners' weight were not significant for any emotion ratings toward the dog. No significant effects of dog or owners' body weight were found for liking of the dog. Similar effects were found for emotion ratings toward owners (Figure 1b). The dog with obesity elicited greater blame, frustration, disgust, and contempt toward the owners than did the lean dog (all p<0.001). Veterinarians reported more disgust toward owners with obesity than toward lean owners, although ratings were low overall. Main effects were not significant for liking of the owners, and no interaction effects of dog and owner weight were significant.

Causes of weight.-Factor analysis produced five causal categories: biology (breed, age, genetics); dog behavior (treats, portions, physical activity); owner behavior (owners' relationship to food, health habits); owner responsibility (owners' level of responsibility, commitment, motivation); and environment (neighborhood safety, access to outdoor space, children in the home, other animals in the home, dog history of food deprivation). Items for owner nutrition knowledge and finances did not load onto any factor and were excluded. To account for the total comparisons across causal attribution categories and ratings of dog/owner weight affecting one another, statistical significance was set at p≤0.002. Mean ratings of causal factors are displayed in Table S2. Significant interactions emerged for biology and owner behavior (see Figures 2a and 2b for simple slopes). When the owners had obesity, participants attributed the dog with obesity's weight less to biology and more to owner behavior than they did for the lean dog's weight. No effects of dog body weight were found when the owners were lean (i.e., dogs with and without obesity were perceived to have equivalent causes of weight when the owners were lean). A significant interaction of dog and owner weight also emerged for perceptions of the degree to which, in general, the owners' weight affected the dog's weight. Participants perceived the lean owners' weight as having less of an effect on the dog with obesity's weight than on the lean dog's weight. Conversely, participants reported that the owners with obesity's weight had more of an effect on the dog with obesity's weight than on the lean dog's weight (Figure 2c).

Treatment recommendations and expectations.-Owners of the dog with obesity were rated as less likely to comply with weight-related treatment recommendations than owners of the lean dog (p<0.001). No effects of dog or owner body weight were found for desire to continue to treat the dog or the prescribed number of treats per day. Similarly, no differences emerged in diagnostic recommendations for a dog presenting with respiratory problems. When the dog was diagnosed with a collapsed trachea, participants were more likely to recommend weight loss treatment for the dog with obesity versus the lean dog (OR=190.8, CI=23.4-1556.2, p<0.001).
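Because the treatment-recommendation outcomes are binary, the odds ratios reported above come from logistic regression; a minimal sketch with simulated data and illustrative variable names, not the study's dataset:

```python
# Sketch of a logistic regression for a yes/no treatment recommendation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 205
df = pd.DataFrame({"dog_obese": rng.integers(0, 2, n),
                   "owner_obese": rng.integers(0, 2, n)})
logit_p = -2 + 4 * df["dog_obese"]  # strong simulated effect of dog weight
df["recommend_weight_loss"] = (
    rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit("recommend_weight_loss ~ C(dog_obese) + C(owner_obese)",
                data=df).fit(disp=False)
odds_ratios = np.exp(fit.params)   # exponentiated coefficients are ORs
conf_int = np.exp(fit.conf_int())  # 95% CI on the OR scale
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```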
Other clinical practices.-Table 2 lists terms endorsed by veterinarians to describe dogs with excess weight to clients and/or colleagues. "Overweight" and "obese" were the most commonly endorsed terms (80-96%), followed by "heavy," "fat," and "chunky" (>60% each). Approximately 28% of veterinarians endorsed using terms such as "tick" and "coffee table," and an additional 10.7% wrote in some variant of the term "ottoman." Approximately 10% of veterinarians reported that they had counseled owners to seek weight loss treatment from a human health professional (8.3% reported they did this rarely, 2% sometimes or very often). The majority of veterinarians (76.1%) recommended that obesity be labeled as a disease in dogs.

Discussion

Practicing veterinarians endorsed more negative emotional responses toward dogs and owners when the dog had obesity than when it was lean. Notably, ratings for some of these emotions were low overall. Veterinarians also reported feeling more compassion toward the dog with obesity and reported respect for dogs and owners regardless of body weight. Still, the observed differences in negative emotional responses by dog body weight provide the first known empirical evidence of potential weight bias among veterinarians.

Owners' weight interacted with dog weight to shape veterinarians' beliefs about the causes of the dog's weight. Lean owners received "credit" for keeping their dogs lean but were seen as less responsible for the dog with obesity's weight, while owners with obesity were viewed as responsible for their dog's obesity but not leanness. Similarly, the dog with obesity's weight was rated as less biologically based than the lean dog's weight when the owners had obesity, but the perceived biological basis of the dog's weight did not differ when the owners were lean. These observed differences provide further evidence of how dog and owner body weight may bias veterinarians' assessment of their clients.

As expected, veterinarians were more likely to provide weight-related treatment recommendations for the dog with obesity than for the lean dog. Owners' weight did not appear to affect these recommendations. Diagnostic recommendations for a dog with respiratory problems also did not differ by dog or owner body weight, although when the dog was described as having a collapsed trachea, weight loss was more likely to be recommended for the dog with obesity versus the lean dog. Weight can affect respiratory health in dogs, suggesting that this may be an appropriate recommendation, as long as other potential causal factors are also considered and not dismissed or ignored (22). In addition, owners of the dog with obesity were rated as less likely to comply with weight-related treatment recommendations than owners of the lean dog.
Veterinarians reported using terms to describe dogs with excess weight such as "fat," "tick," "coffee table," and "ottoman." The term "fat" is typically perceived as stigmatizing when used to describe humans (23-26), although it is unknown how this and other terms are perceived when used to describe dogs.

Study 2 examined the effects of dog and owner body weight on perceptions among veterinary students. This served as a replication, as well as an investigation of whether weight bias may emerge early in medical training, as has been observed in other preservice health trainees (27,28).

Study 2

Methods

Veterinary students from the same institution as Study 1 were recruited by email over several weeks, with a goal of recruiting 120 participants. Of the 147 students who entered the survey, 103 completed at least the first block of items and were included in the analyses. Table 1 presents participant demographic characteristics. Participants reported their current year in veterinary school. All other procedures were identical to those described in Study 1.

Results

Manipulation check.-As with veterinarians, students rated the dog with obesity as having a significantly higher weight status than the lean dog (6.5 vs. 4.1, p<0.001), and the owners with obesity as having a higher weight status than the lean owners (6.2 vs. 4.2, p<0.001). Interaction effects of dog and owner weight on perceived weight status were not significant.

Emotions and liking.-Consistent with Study 1, students reported greater blame, frustration, and disgust toward the dog with obesity compared to the lean dog (Figure 3a, Table S3a). The dog with obesity also elicited significantly higher ratings of blame, frustration, disgust, and contempt toward owners than did the lean dog (Figure 3b, Table S3b). In addition, participants reported liking the owners less if the dog had obesity.

Causes of weight.-Ratings for "owner responsibility" as a cause of dog weight were lower when the dog had obesity than when it was lean (p<0.001; Table S4). In addition, dog and owner weight significantly interacted for ratings of owner behavior as a cause of dog weight (Figure 4a). When the owners had obesity, the dog with obesity's weight was attributed more to owners' behavior than was the lean dog's weight (p<0.001). Conversely, when the owners were lean, the lean dog's weight was rated as more attributable to owners' behavior than was the dog with obesity's weight (p=0.002). A significant interaction between dog and owners' weight was also found for ratings of the effects of owners' weight on the dog's weight (Figure 4b). When the owners had obesity, the owners' weight was rated as having significantly more effect on the dog with obesity's weight than on the lean dog's weight, but no such effects emerged when the owners were lean.

Other clinical practices.-"Overweight" was the most common term used to describe a dog with excess weight, followed by "chunky," then "obese." Over half of students endorsed using the terms "fat" or "chubby" (Table 2). Only 5-10% reported using the terms "tick" or "coffee table." Approximately seven percent of students reported counseling owners to speak with a healthcare professional about managing their own weight (2.9% reported they did this rarely, 1.9% sometimes, and 1.9% very often). The majority of students (79.6%) recommended that obesity should be labeled a disease in dogs.

Discussion

Study 2 replicated among veterinary students several of the findings from Study 1.
Consistent with Study 1 results, students reported more negative emotions toward the dog and owners when the dog had obesity. Also similar to Study 1, dog and owner body weight interacted in their effects on perceived causes of dog weight. Students attributed the dog's weight more to owners' behavior, and to the overall influence of the owners' weight, when the owners and dog both had obesity. Study 1's interaction effects for biological attributions of the dog's weight did not replicate, and students recommended fewer weight loss behaviors for the dog with obesity than did veterinarians. The use of stigmatizing weight-related terms was also slightly lower among students than veterinarians. However, the pessimism about weight-related treatment compliance for the dog with obesity and the recommendation of weight loss for a dog with obesity and a collapsed trachea that were observed among veterinarians also appeared among students.

General Discussion

This is the first study to investigate weight bias among practicing veterinarians and students. Across studies, veterinarians and veterinary students reported more negative emotional responses (including disgust, frustration, blame, and contempt) toward dogs and owners when the dog had obesity than when it was lean. As noted, the ratings for some of these emotions (e.g., contempt) were low across conditions and did not rise to a level of frank stigmatization. Students reported that they liked the owners less if their dog had obesity, and both veterinarians and students reported pessimism about the owners' likelihood of complying with weight-related treatment recommendations for the dog with obesity. In human health care, negative automatic emotional responses to patients based on specific characteristics (e.g., race, weight) contribute to implicit bias and differential treatment of patients with stigmatized identities (29). Further research is needed to determine the potential impact of weight bias on veterinarians' interactions with pet owners and clinical decision making.

When the owners had obesity, students and veterinarians both perceived the owners' personal relationship to food and health habits as causing the dog with obesity's weight. They also generally viewed the owners with obesity's weight as having a stronger effect on the dog with obesity's weight. Thus, participants made the common assumption that individuals with obesity have poorer eating habits and health behaviors than lean individuals.

Seven to ten percent of students and veterinarians reported that they had counseled a pet owner to seek weight management from a human health professional. It is surprising that any participants reported counseling owners about weight, considering that veterinarians are not trained to give health advice to humans. Greater attention should be paid to whether some veterinarians are commenting on owners' weight or health habits and, as a result, potentially stigmatizing owners with obesity. Related to this point, more than half of veterinarians and students endorsed use of the term "fat" to describe excess weight in dogs, and up to 28% used the terms "coffee table," "ottoman," or "tick." If these findings are replicated in other samples, veterinary organizations may benefit from considering the standards for training and practice put forth by human health care organizations for the use of non-stigmatizing and patient-centered language in obesity care (7,8).
Future studies could investigate the effects of respectful and "pet-first" language related to weight on provider-client communication and treatment utilization among owners of pets with obesity.

Veterinarians viewed the dog with obesity's weight as less biologically based when the owners had obesity. In addition, over three quarters of veterinarians and students supported labeling obesity in dogs as a disease. It is possible that framing obesity as a medical condition may reduce stigma, in part by increasing biological attributions for weight and thus reducing blame (30-32). However, the effects of biological attributions for obesity on stigma and treatment outcomes in humans are largely mixed (21,33,34). As causal attributions and disease labeling continue to be examined in humans, veterinarians may also continue to consider how this debate pertains to trainees and practice guidelines in their field.

In a clinical scenario in which a dog presented with a collapsed trachea, veterinarians and students were more likely to recommend weight loss to the dog with obesity than to the lean dog. Weight is one of many factors that can cause respiratory problems in dogs (22), and practitioners who focus on weight loss in their recommendations may miss other potential health issues that require treatment (4). Future studies could test veterinarians' ability to accurately diagnose obesity in dogs and identify its known comorbidities. Owner body weight did not appear to affect treatment recommendations in this study. Studies that assess treatment recommendations in simulated clinical scenarios with standardized patients could further elucidate the potential effects of body weight on treatment recommendations for conditions that may not be entirely related to obesity.

Strengths of the current studies included the novel investigation of weight bias in a previously neglected population of health professionals, replication in two different veterinary samples, and use of a randomized, experimental design to assess causal effects. The studies were limited by sampling from a single institution, use of hypothetical dog/owner drawings and vignettes, reliance on self-report measures, and a sole focus on perceptions of dogs (versus other types of pets, such as cats). Due to the high number of comparisons (resulting in a conservative statistical adjustment) and a relatively small sample size (particularly for students), the analyses may have been limited in their power to detect some statistically significant findings. Further replication in a larger, multi-site study is needed to verify results from this preliminary investigation of weight bias among veterinarians.

Supplementary Material

Refer to Web version on PubMed Central for supplementary material.
Genomes or exomes: evaluation of cost, time and coverage

The field of human genetics is being reshaped by exome and genome sequencing. Several lessons are evident from observing the rapid development of this area over the past 2 years, and these may be instructive with respect to what we should expect from 'next-generation human genetics' in the next few years.

Cancer is driven by mutation. Using massively parallel sequencing technology, we can now sequence the entire genome of cancer samples, allowing the generation of comprehensive catalogs of somatic mutations of all classes. Bespoke algorithms have been developed to identify somatically acquired point mutations, copy number changes and genomic rearrangements, which require extensive validation by confirmatory testing. The findings from our first handful of genomes illustrate the potential for next-generation sequencing to provide unprecedented insight into mutational processes, cellular repair pathways and gene networks associated with cancer development. I will also review the possible applications of these technologies in a diagnostic and clinical setting and the potential routes for translation.

Massively parallel sequencing is transforming our knowledge of cancer, yet the medical value of next-generation approaches has not been fully established. From a technical perspective, it is easy to envisage that, within a few years, the primary diagnostic approach for all cancers will be to assess a partial or whole cancer genome sequence; however, the adoption of this approach will ultimately depend on the development of robust and valid models for the tailoring of therapy. Thus, within a short period, the focus of genomic investigation will shift from the current emphasis on discovery in poorly annotated datasets, such as The Cancer Genome Atlas, to ambitious investigations that focus on precise clinical questions. This transition will occur in two stages. The first stage will be a retrospective, 'genome-backward' approach, in which patients are treated blind to genomics but consent to prospective germline and tumor sequencing, as well as data sharing. In this way, models that use mutation patterns to predict treatment outcomes can be developed. In a later prospective, 'genome-forward' phase, therapeutic postulates that arise from genome sequencing will be used as the basis for clinical trial eligibility or stratification. Specific examples of how these approaches are being studied in breast cancer will be discussed.

Recent studies have indicated that humans have an exceptionally high per-generation mutation rate of 7.6 × 10^-9 to 2.2 × 10^-8 per base. These spontaneous germline mutations can have serious phenotypic consequences when affecting functionally relevant bases in the genome. In fact, their occurrence may explain why cognitive disorders with a severely reduced fecundity, such as mental retardation, remain frequent in the human population, especially when the mutational target is large and comprises many genes. This would explain a major paradox in the evolutionary genetic theory of these disorders. In this presentation, I will describe our recent work on using a family-based exome sequencing approach to test this de novo mutation hypothesis in ten patients with unexplained mental retardation [1]. Unique nonsynonymous de novo mutations were identified and validated in nine genes. Six of these, identified in different patients, were likely to be pathogenic based on gene function, evolutionary conservation and mutation impact.
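As a rough order-of-magnitude check on what such a rate implies (a sketch assuming a haploid genome of about 3.2 Gb and counting both transmitted parental genomes; the genome size is an assumption, not a figure from the abstract):

```python
# Back-of-the-envelope expected number of de novo point mutations per
# offspring, given a per-base, per-generation mutation rate.
HAPLOID_GENOME = 3.2e9            # bases; approximate human haploid size
for mu in (7.6e-9, 2.2e-8):       # range quoted above
    expected = mu * 2 * HAPLOID_GENOME  # both parental genomes transmitted
    print(f"mu = {mu:.1e} -> ~{expected:.0f} de novo mutations per child")
```

Only a small fraction of these would hit functionally relevant bases, which is the crux of the de novo hypothesis for disorders with large mutational targets.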
The clinical relevance of these novel genes, and the ultimate proof that they cause disease, lies in the identification of de novo mutations in additional patients with a similar phenotype. As such, we are currently screening approximately 1,200 patients with unexplained mental retardation for mutations in YY1, which is one of these newly identified genes. In addition, we are extending our family-based exome sequencing approach to 100 patients to establish the diagnostic yield for de novo mutations in patients with unexplained mental retardation. These findings, when replicated, would provide strong experimental support for a de novo paradigm for mental retardation. Together with de

Diseases of the vaginal tract result from perturbations of the complex interactions among microbes of the host vaginal ecosystem. Recent advances in our understanding of these complex interactions have been enabled by next-generation-sequencing-based approaches, which make it possible to study the vaginal microbiome. In harnessing these approaches, we are beginning to define what constitutes an imbalance of the vaginal microbiome and how such imbalances, along with associated host factors, lead to infection and disease states such as bacterial vaginosis (BV), preterm births, and susceptibility to HIV and other sexually acquired infections. We have exploited various approaches to this end: comparative analysis of reference microbial genomes of vaginal isolates; comparative microbiome, metabolome and metagenome analysis of vaginal communities from subjects deemed to be healthy and individuals with BV; and comparative microbiome analysis of vaginal communities from humans and non-human primate species. The results from comparative genome sequencing have led us to suggest that different strains of the proposed pathogen Gardnerella vaginalis have different virulence potentials and that the detection of G. vaginalis in the vaginal tract is not indicative of a disease state [1]. Comparative microbiome, metabolome and metagenome analysis of vaginal communities from humans has demonstrated that the microbial communities from subjects with BV have a defined bacterial composition and metabolic profile that is distinct from subjects who do not have BV [2 and unpublished observations]. Our studies of microbial communities from non-human primate species and humans provide a unique comparative context. From an evolutionary perspective, humans and non-human primates differ considerably in mating habits, estrus cycles and gestation period. Moreover, birth is difficult in humans relative to other primates, increasing the risks of maternal injury and infection. In light of these numerous differences between humans and non-human primates, we hypothesize that humans have microbial populations that are distinct from those of non-human primates. Preliminary results show that the vaginal microbiomes of non-human primates are more diverse and are compositionally distinct from human vaginal microbiomes [3,4]. The composition of bacterial genera found in non-human primates is dissimilar to that seen in humans, most notably with lactobacilli being much less abundant in non-human primates. Our observations point to vaginal microbial communities being an important component of an evolutionary set of adaptations that separates humans from other primates and is of fundamental importance to health and reproductive function.
For more than a decade, the Joint Center for Structural Genomics (JCSG) [1] has been at the forefront of developing tools and methodologies that allow the application of high-throughput structural biology to a broad range of biological and biomedical investigations. In the previous phases of the National Institutes of Health's Protein Structure Initiative (PSI; 2000 to 2010) [2], we explored structural coverage of uncharted regions of the protein universe [3], as well as of a single organism, allowing complete structural reconstruction of the metabolic network of Thermotoga maritima [4]. In the current phase (PSI:Biology; 2010 to 2015), the JCSG is leveraging its high-throughput platform to explore the structural basis for host-microbe interactions in the human microbiome. The emerging field of metagenomics has been particularly enlightening: the human gut microbiome sequencing projects have already uncovered fascinating new families and expansions of known families for adaptation to this environment. The gut microbiota is dominated by poorly characterized bacterial phyla, which contain an unusually high number of uncharacterized proteins that are largely unstudied. Their influence upon human development, physiology, immunity and nutrition is only starting to surface and is thus an exciting new frontier for structural genomics, where we can structurally investigate the contributions of these microorganisms to human health and disease. The JCSG is located

Next-generation sequencing of RNA (RNA-Seq) is a powerful tool that can be applied to a wide range of biological questions. RNA-Seq provides insight at multiple levels into the transcription of the genome. It yields sequence, splicing and expression-level information, allowing the identification of novel transcripts and sequence alterations. We have been developing and comparing methods for samples that present a challenge: that is, those with low quantity and/or quality RNA. RNA-Seq methods that start from total RNA and do not require the oligo(dT) purification of mRNA will be valuable for such challenging samples. Such methods use alternative approaches to reduce the fraction of sequencing reads derived from rRNA. We will present results from multiple approaches, including the use of not-so-random (NSR) primers for cDNA synthesis, low-C0t hybridization with a duplex-specific nuclease for light normalization and NuGEN's Ovation RNA-Seq kit. We demonstrated that these three methods successfully reduce the fraction of rRNA to less than 13%, even when starting from degraded RNA. We compared the performance between these methods and with 'gold standard' RNA-Seq data (derived from samples with large quantities of high-quality RNA), using quantitative criteria that evaluate effectiveness for genome annotation, transcript discovery and expression profiling. The application of these methods to samples that contain degraded RNA and/or very low input amounts of RNA will also be presented.

Viral diversity in children with diarrhea in Gambia
Irina Astrovskaya, Bo Liu and Mihai Pop

Results
We were able to detect and assemble sequences from known diarrhea-causing viruses (such as rotaviruses, adenoviruses and noroviruses), known human viruses (such as herpesviruses and enteroviruses) and potential diarrhea-causing viruses (such as bocaviruses, astroviruses and parechoviruses). These findings were consistent with independent virology results.
In some clinical cases, sequences from classic viruses were found, but the virology results were negative.

COSMIC provides a large number of graphical and tabular views for interpreting and mining the large quantity of information, as well as the facility to export the relevant data in various formats. The website can be navigated in many ways to examine mutation patterns on the basis of genes, samples and phenotypes, which are the main entry points to COSMIC. COSMIC also provides various options to browse the data in a genomic context. Integration with the Ensembl genome browser allows the visualization of full genome annotations, together with COSMIC data, on the GRCh37 genome coordinates. COSMIC also contains its own genome browser, which facilitates data analysis by combining genome-wide gene structures and sequences with rearrangement breakpoints, copy number variations and all somatic substitutions, deletions, insertions and complex gene mutations. The main COSMIC website [1] encompasses all of the available data. However, within COSMIC, the Cancer Cell Line Project [3] is a specialized component, which provides details of the genotyping of almost 800 commonly used cancer cell lines, through the set of known cancer genes. Its focus is to identify driver mutations, or those likely to be implicated in the oncogenesis of each tumor. This information forms the basis for integrating COSMIC with the Genomics of Drug Sensitivity in Cancer project [4], which is a joint effort with the Massachusetts General Hospital [5] to screen this panel of cancer cell lines against potential anticancer therapeutic compounds to investigate correlations between somatic mutations and drug sensitivity. Data on somatic mutations in cancer are being produced at a rapidly increasing rate, and the combined analysis of large distributed datasets is becoming ever more difficult. However, COSMIC curates and standardizes this information in a single database, providing user-friendly browsing tools and analytical functions, thus ensuring its role as a key resource in human cancer genetics.

Background
Recent genome-wide association studies (GWAS) have identified allele T of a single nucleotide polymorphism (SNP), rs2294008, in the prostate stem cell antigen (PSCA) gene as a risk factor for bladder cancer [1,2]. In the present study, we aimed to find additional disease-associated SNPs in the PSCA region and to explore their possible molecular function.

Methods
Based on information from the 1000 Genomes and HapMap 3 projects, we performed imputation analysis on 3,532 bladder cancer cases and 5,120 healthy controls of European ancestry from the stage 1 bladder cancer GWAS, within ±100 kb of the region flanking the GWAS signal, rs2294008. The average allele dosage and best-guess genotypes were estimated and tested for association between SNP variants and bladder cancer risk by using unconditional logistic regression. Functional follow-up studies included RNA sequencing in normal and tumor bladder samples and electrophoretic mobility shift assays to examine the potentially altered DNA-protein interactions for SNPs of interest.

Results
A total of 639 imputed and 37 genotyped SNPs within ±100 kb of the region of the original GWAS signal were tested for genetic association with bladder cancer. In these stage 1 GWAS samples, the SNP rs2294008 had a per-allele odds ratio (OR) of 1.09 (95% confidence interval (CI) = 1.02 to 1.16, P = 6.93 × 10^-4).
Multivariable logistic regression analysis adjusted for the study center, age, gender, smoking status and rs2294008 genotype revealed a novel associated variant, rs2978974 (OR = 1.11, 95% CI = 1.04 to 1.19, P = 1.62 × 10^-3). There was low linkage disequilibrium between rs2978974 and the original GWAS signal, rs2294008 (D' = 0.19, r^2 = 0.02). Only individuals carrying the risk variants of both SNPs had an increased risk of bladder cancer (OR = 1.24, 95% CI = 1.13 to 1.35, P = 4.69 × 10^-6), and not individuals who carried a risk variant of only one of the SNPs (P > 0.05). Stratified analysis suggested that this compound effect of rs2294008 and rs2978974 was more significant in males (OR = 1.27, P = 2.80 × 10^-6) than in females (OR = 1.08, P = 0.52). rs2978974 resides 10 kb upstream of rs2294008, is marked by an H3K4me3 signal and is in the vicinity of an androgen-receptor-binding site. Using RNA sequencing of bladder samples, we showed that rs2978974 is located within an alternative, untranslated first exon of PSCA. Using the electrophoretic mobility shift assay with nuclear proteins from LNCaP and HeLa cells, we observed that the non-risk-associated allele (G) of rs2978974, but not the risk allele (A), could bind ELK1, a protein belonging to the ETS family of transcription factors.

Conclusions
We identified a SNP, rs2978974, in the PSCA region as a novel marker for bladder cancer susceptibility. There was a compound effect in carriers of both the rs2294008 and rs2978974 risk variants. The functional relevance of rs2978974 might be related to the loss of ELK1 regulation by the risk allele (A) and differential regulation of PSCA mRNA expression.
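For reference, the linkage-disequilibrium statistics quoted in this abstract (D' and r^2) can be derived from haplotype and allele frequencies; a self-contained sketch with illustrative inputs, not the study's haplotype counts:

```python
# D, D' and r^2 for two biallelic SNPs from haplotype/allele frequencies.
def ld_stats(p_ab, p_a, p_b):
    """p_ab: frequency of the A-B haplotype; p_a, p_b: allele frequencies."""
    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = abs(d) / d_max if d_max > 0 else 0.0
    r2 = d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d_prime, r2

print(ld_stats(p_ab=0.11, p_a=0.45, p_b=0.20))  # illustrative values only
```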
Background
Dinoflagellates are a diverse group of ecologically important eukaryotic algae, the global impact of which ranges from the large-scale primary production of oxygen [1] to devastating toxic algal blooms [2]. These organisms have exceptionally large genomes (10^9 to 10^11 bases) [3] and highly duplicated genes (which can occur thousands of times within a single genome) [4]. These and other unusual characteristics have made dinoflagellates difficult to study using traditional molecular biology techniques. Sequence data for dinoflagellates are correspondingly sparse, and not a single genome sequence has been published to date. As part of our project called Assembling the Dinoflagellate Tree of Life (DAToL), our laboratory has sequenced the transcriptome of Polarella glacialis. Its genome is estimated to be only 3 Gb in size, making it one of the smallest known dinoflagellate genomes. Because we had to rely on de novo assemblers that had been tested using data from organisms that are extremely divergent from dinoflagellates, we took special care in our attempts to validate the data. Before expanding our analyses to include additional dinoflagellates, we compared the results from different sequencing and assembly methods.

Methods
Total RNA was extracted from cultured P. glacialis. This sample was then divided and shipped to Macrogen for rRNA degradation, library preparation and sequencing. One library was sequenced on one-eighth of a Roche/454 GS FLX picotiter plate using Titanium chemistry. A second library was sequenced using one lane on an Illumina GAIIx sequencer for 78 cycles in both directions (paired end). The sequences were assembled using Newbler, MIRA, Oases and Trinity, and they were analyzed using various custom scripts.

Results
The total amount of unassembled 454 sequence data added to less than one-third of the combined lengths of only those Trinity transcripts that had a significant BLAST hit against a sequence in GenBank, indicating that we did not achieve complete coverage with our 454 data.

Conclusions
Our primary hypothesis was that the longer read lengths of the 454 data might allow the corresponding assemblers to better resolve repetitive sequences, which could be instrumental for assembling conserved regions within highly duplicated genes. Our failure to obtain complete coverage with the 454 dataset undermined our ability to test this hypothesis, although we made several other interesting observations. Notably, despite the vast disparity in the depth of coverage between the 454 and Illumina assemblies, we observed unique, apparently real sequences within some of the 454 contigs.

few days. However, the sequencing results always turn out to contain several hundred contigs. A multiplex PCR procedure is then needed to fill all of the gaps and to link the contigs into one full-length genome sequence [1-10]. The full-length prokaryotic genome sequence is the gold standard for comparative prokaryotic genome analysis. This study assessed pyrosequencing strategies by using a simulation with 100 prokaryotic genomes.

Results
Our simulation shows the following: first, a single-end 454 Jr Titanium run combined with a paired-end 454 Jr Titanium run may assemble about 90% of the 100 genomes into <10 scaffolds and 95% of the 100 genomes into <150 contigs; second, the average contig N50 size is more than 331 kb (Table 1); third, the average single-base accuracy is >99.99% (Table 1); fourth, the average false gene duplication rate is <0.7% (Table 1); fifth, the average false gene loss rate is <0.4% (Table 1); sixth, the total size of long repeats (both repeat length >300 bp and >700 bp) is significantly correlated with the number of contigs (Table 4); and, seventh, increasing the read length of a pyrosequencing run could improve the assembly quality significantly (Tables 1, 2 and 3).

Conclusions
A single-end 454 Jr run combined with a paired-end 454 Jr run is a good strategy for prokaryotic genome sequencing. This strategy provides a solution to producing a high-quality draft genome sequence of almost any prokaryotic organism, selected at random, within days. It could be the first step to achieving the full-length genome sequence. It also makes the subsequent multiplex PCR procedure (for gap filling) much easier, aided by knowledge of the orders/orientations of most of the contigs. As a result, large-scale full-length prokaryotic genome-sequencing projects could be finished within weeks.
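The contig N50 used above as a quality metric is the contig length at which half of the total assembled bases lie in contigs of that length or longer; a short sketch:

```python
# N50: sort contig lengths in descending order and return the length at
# which the running total first reaches half of all assembled bases.
def n50(lengths):
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length

print(n50([400_000, 300_000, 150_000, 80_000, 20_000]))  # -> 300000
```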
Background
A recent genome-wide association study (GWAS) identified a single nucleotide polymorphism, rs8102137, located 6 kb upstream of the cyclin E1 gene (CCNE1) on chromosome 19q12, as a risk factor for bladder cancer (odds ratio (OR) = 1.13, P = 1.7 × 10^-11) [1]. CCNE1 encodes a cell cycle protein that regulates cyclin-dependent kinases and is therefore an important cancer susceptibility gene.

Methods
This study used 42 bladder tumor samples and 41 normal bladder tissue samples (24 matched normal-tumor pairs), HeLa cells and several prostate and bladder cancer cell lines. Genotyping of rs8102137 in DNA and rs7257694 in both DNA and cDNA samples was performed using an allelic discrimination genotyping assay. TaqMan and SYBR Green assays were used to measure the expression of the different CCNE1 isoforms. The CCNE1 isoforms were cloned into a pFC14A (HaloTag) CMV Flexi Vector. Protein expression of CCNE1 isoforms in normal and tumor bladder tissues and transfected cells was analyzed by western blotting. Subcellular localization of recombinant CCNE1 splicing forms was analyzed by confocal microscopy.

Results
CCNE1 mRNA was expressed at a higher level in bladder tumors (n = 42) than in adjacent normal bladder tissue samples (n = 41; 3.7-fold, P = 2.7 × 10^-12). However, no association was found between mRNA expression level and the genotype of rs8102137. We observed strong allelic expression imbalance for a synonymous coding variant located in the last exon (rs7257694, Ser390Ser), which is in high linkage disequilibrium with rs8102137 (normal bladder tissue samples, n = 41, D' = 1.0, r^2 = 0.815; HapMap CEU samples, n = 60, D' = 0.95, r^2 = 0.68). In normal and tumor tissue samples heterozygous for both single nucleotide polymorphisms, the risk variant of rs8102137 was associated with lower expression of allele T of rs7257694 (normal samples, P = 2.2 × 10^-4; tumor samples, P = 1.11 × 10^-10). Western blotting analysis of bladder tissue and prostate cell line lysates revealed that the allelic expression imbalance is likely to be related to two CCNE1 protein isoforms that showed a differential pattern of expression dependent on the rs8102137 and rs7257694 genotypes. We have cloned the alternative splicing forms of CCNE1 and are currently evaluating their functional relevance.

Conclusions
Our results suggest that bladder-cancer-associated genetic variants of the CCNE1 gene might contribute to altered cell cycle regulation, owing to differential mRNA splicing producing different protein isoforms of CCNE1.
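Allelic expression imbalance at a heterozygous site such as rs7257694 is commonly assessed by comparing allele-specific counts against the 50:50 expectation; a sketch using a simple binomial test (the counts are invented for illustration, and the abstract does not specify the exact test used):

```python
# Test whether allele T is under-expressed relative to the other allele
# at a heterozygous site; counts below are hypothetical.
from scipy.stats import binomtest

t_count, other_count = 180, 320
result = binomtest(t_count, n=t_count + other_count, p=0.5)
print(f"T fraction = {t_count / (t_count + other_count):.2f}, "
      f"p = {result.pvalue:.3g}")
```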
Background
Metagenomics has allowed the study of a wide range of microbial communities, from those within the sea [1,2] to those of the human body [3]. Increasingly, de novo assembly is the first step in the analysis of these metagenomic samples. As the targets have increased in complexity, computational tools have started to emerge [4,5] to address the challenges presented by the assembly of these datasets. Although the targets and analyses have become more complex, the means of presenting the results has remained the same: a multi-FASTA text file. This presentation hides the variation that is present in the sampled biological community. The ability to navigate and view the complexity of a genomic sample may help drive novel biological insights. Here, we present a graphical visualization tool that allows the visual inspection of genome assembly graphs and the characterization of the genomic variation that is present in these graphs (that is, the differences between two or more related haplotypes commonly found in metagenomes or higher eukaryotes).

Methods
Our software, ScaffViz [6], is open source and was developed as a plug-in for the Cytoscape graph viewer package [7,8]. Our assembly view represents assembly metadata within node/edge attributes. For example, node height corresponds to coverage (the amount of oversampling of a sequence), and node width is proportional to the length of the sequence. We support assemblies from Celera Assembler [9], Newbler [10], Bambus 2 and MetAMOS. The creation and initialization of Cytoscape objects is abstracted to allow a developer to easily add new assembly result formats without knowledge of Cytoscape's API. We developed a layout algorithm based on information from the assembler on node position, orientation and length. ScaffViz allows users to show (or hide) an arbitrary subset of nodes. The viewer can also output the genome sequence that corresponds to any subset of the graph, including all alternative sequences present in all selected subpaths. We believe that this representation may prove to be instrumental in finding and characterizing structural variants such as alternative genes, alternative regulatory units or mobile genomic elements.

Results
We evaluated the performance of ScaffViz on seven datasets of varying size and complexity. We report that the run time is approximately linear with respect to the number of elements in the graph (nodes + edges). The memory scales linearly with respect to the number of nodes. Extrapolating from these factors, a graph of 250,000 contigs can be opened in approximately 2 minutes using approximately 2.5 GB of memory. ScaffViz is scalable to large graphs and can be run on a laptop.

Conclusions
We have developed a novel open-source assembly graph viewer, ScaffViz, as a plug-in for Cytoscape. ScaffViz supports the output of several popular assembly programs and is scalable to large metagenomic assemblies on a laptop.

Most of the DNA viruses in the gastrointestinal tract are phages, which infect bacterial hosts. Despite phages being the most abundant organisms on Earth, as well as extremely active players in the global ecosystem, much remains unknown about how they function in their natural environments. Advances in whole genome sequencing technologies have generated a large collection of hundreds of phage genomes, allowing deep insight into the genetic evolution of phages, and metagenomics technologies seem to promise more rewarding glimpses into their life cycles and community structures. Recently, we developed an automated approach to assemble a collection of orthologous gene clusters of double-stranded DNA phages (phage orthologous groups, or POGs). This approach follows the well-known clusters of orthologous groups (COGs) framework to identify sets of orthologs by examining top-ranked sequence similarities between proteins in complete genomes without the use of arbitrary similarity cutoffs, and it thus represents a natural system for examining fast-evolving and slow-evolving proteins alike. This automated approach was designed to keep pace with the rapid and accelerating growth of whole genome information from sequencing projects. In particular, we employ a faster graph-theoretical COG-building algorithm that vastly improves our ability to deal with larger numbers of genomes (N) by reducing the worst-case complexity from O(N^6) to O(N^3 log N). This system encompasses more than 2,000 groups from the almost 600 known phage genomes deposited at the National Center for Biotechnology Information and is in the process of being expanded to include single-stranded DNA phages and single- and double-stranded RNA phages.
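The COG-style idea of seeding orthologous groups from triangles of best hits and merging triangles that share an edge can be pictured in a toy form (the hit list below is invented, and the published POG procedure involves considerably more detail):

```python
# Toy COG-style clustering: reciprocal best hits are edges; triangles
# spanning three genomes seed groups, and triangles sharing an edge merge.
from itertools import combinations
import networkx as nx

rbh = [("g1.A", "g2.B"), ("g2.B", "g3.C"), ("g1.A", "g3.C"),  # a triangle
       ("g1.X", "g2.Y")]                                      # a lone pair

g = nx.Graph(rbh)
triangles = [c for c in nx.enumerate_all_cliques(g) if len(c) == 3]

merged = nx.Graph()
for tri in triangles:
    merged.add_edges_from(combinations(tri, 2))
print(list(nx.connected_components(merged)))  # -> [{'g1.A','g2.B','g3.C'}]
```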
Using this approach, we found that more than half of the POGs have no or very few evolutionary connections to their cellular hosts, indicating that these phages combine the ability to share and transduce the host genes with the ability to maintain a large fraction of unique, phage-specific genes. Such genes are useful for targeted research strategies: for example, as diagnostic indicators and fundamental units of systems biology studies. We employed this set of phage-specific genes to probe the composition of several oceanic metagenomic samples. Although virus-enriched samples indeed contain more homologous matches to phage-specific POGs than a full metagenomic sample also containing cellular DNA, the total gene repertoire of the marine DNA virome is dramatically different from that of known phages. In particular, it is dominated by rare genes, many of which might be contained within virus-like entities such as cellular gene transfer agents rather than true viruses. This result might suggest the necessity of radically rethinking what constitutes the 'virus world', because the major component of (marine) viromes could be gene transfer agents that encapsidate bacterial and archaeal genes.

Background
Recent genome-wide association studies have led to the reliable identification of single nucleotide polymorphisms (SNPs) at a number of loci associated with an increased risk of developing specific common human diseases. Each such locus implicates multiple possible candidate SNPs as being involved in the disease mechanism, and determining which SNPs actually contribute, and by what mechanism, is a major challenge. A variety of mechanisms may link the presence of a SNP to altered in vivo gene product function and hence contribute to disease risk. We have analyzed the role of one of these mechanisms, nonsynonymous SNPs (nsSNPs) in proteins, for associations found in the Wellcome Trust Case-Control Consortium (WTCCC) study of seven common diseases [1] and the follow-up work.

Methods
Using HapMap data and linkage disequilibrium information, we identified all possible candidate SNPs associated with increased disease risk. We then applied two computational methods [2,3], based on analysis of protein structure and sequence, to determine which of these SNPs has a significant impact on in vivo protein function (SNPs3D) [4].

Results
Several of these disease-associated loci were found to be linked to one or more high-impact nsSNPs. In some cases, these SNPs are in well-known proteins (such as human leukocyte antigens). In other cases, they are in less well-established disease-associated genes (for example, MST1 for Crohn's disease), and in yet others, they are in proteins that have been poorly investigated (for example, gasdermin B, also for Crohn's disease). Approximately 55% of these disease-associated loci have at least one nsSNP, and about 33% of them have at least one high-impact nsSNP in those regions.

Conclusions
Together, these data suggest a significant role for nsSNPs in disease risk.

Background
A major goal of metagenomics is to characterize the taxonomic composition of an environment. The most popular approach relies on 16S rRNA sequencing; however, this approach can generate biased estimates owing to differences in the copy number of the gene, even between closely related organisms, and owing to PCR artifacts. In addition, the taxonomic composition can also be determined from metagenomic shotgun sequences by matching reads against a database of reference sequences. One major limitation of the computational methods that have been used for this purpose is the use of a universal classification threshold for all genes at all taxonomic ranks.

Methods
We present a novel taxonomic profiler for metagenomic sequences, MetaPhyler [1], which relies on 31 phylogenetic marker genes as a taxonomic reference. Because genes can evolve at different rates and because shotgun reads contain gene fragments of different lengths, we propose that better classification results can be obtained by tuning the taxonomic classifier to the length of the gene fragment, to a particular gene and to the taxonomic rank. Our classifier uses different thresholds for each of these parameters, and these thresholds are automatically learned from the taxonomic structure of the reference database.
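The per-gene, per-rank, per-length tuning can be pictured as a threshold lookup; the cutoff values below are invented placeholders, not MetaPhyler's learned thresholds:

```python
# Accept a read's classification at a given rank only if its best-hit
# bit score clears the cutoff for that (gene, rank, read-length bin).
THRESHOLDS = {  # placeholder values for illustration
    ("rpoB", "genus", 60): 55.0,
    ("rpoB", "genus", 300): 180.0,
    ("rpoB", "phylum", 60): 40.0,
}

def length_bin(read_len):
    return 60 if read_len < 180 else 300  # two bins, as in the simulation

def classify(gene, rank, read_len, bit_score, taxon):
    cutoff = THRESHOLDS.get((gene, rank, length_bin(read_len)))
    if cutoff is None or bit_score < cutoff:
        return None  # leave the read unclassified at this rank
    return taxon     # assign the best hit's taxon at this rank

print(classify("rpoB", "genus", 60, bit_score=58.2, taxon="Escherichia"))
```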
Results
We randomly simulated about 300,000 DNA sequences of 60 bp and about 70,000 DNA sequences of 300 bp from phylogenetic marker genes. Table 1 shows the performance of the phylogenetic classifications from MetaPhyler, PhymmBL [2], MEGAN [3] and WebCARMA [4]. The query sequence itself was removed from the reference dataset when running the programs. The sensitivity of MetaPhyler is significantly higher than that of the other tools in all situations because our classifier is explicitly trained at each taxonomic rank. In addition, we created a simulated metagenomic sample comprising five genomes. Table 2 shows the taxonomic profiles estimated by the different approaches. In this setting, MetaPhyler also outperforms the other approaches by more accurately reconstructing the true taxonomic distribution.

Conclusions
We have introduced a novel taxonomic classification method for analyzing microbial diversity from whole-metagenome shotgun sequences. Compared with previous approaches, MetaPhyler is more accurate.

Results
We identified a family with a previously undescribed lethal X-linked disorder of infancy comprising a distinct combination of an aged appearance, craniofacial anomalies, hypotonia, global developmental delays, cryptorchidism, cardiac arrhythmia and cardiomyopathy. We used X-chromosome exon sequencing and a recently developed probabilistic disease-gene discovery algorithm to identify a missense variant in NAA10, which encodes the catalytic subunit of the major human amino-terminal acetyltransferase (NAT; also known as hNaa10p). More recently, we became aware that a parallel effort on a second unrelated family converged on the same variant. The absence of this variant in controls, the amino acid conservation of this region of the protein, the predicted disruptive change and the co-occurrence in two unrelated families with the same rare disorder suggest that this is the pathogenic mutation. We confirmed this by demonstrating that the mutant hNaa10p had significantly impaired biochemical activity, and we therefore conclude that a reduction in acetylation by hNaa10p causes this disease.

Conclusions
This is one of the first uses of next-generation sequencing to identify the genetic basis of a previously unrecognized X-linked syndrome. It is also the first evidence of a human genetic disorder resulting from direct impairment of amino-terminal acetylation, one of the most common protein modifications in humans. We have also demonstrated that a probabilistic disease-gene discovery algorithm (VAAST) can readily identify and characterize the genetic basis of this syndrome.

P14
Abstract not submitted for online publication.

Background
Genome-wide association studies (GWAS) have identified a single nucleotide polymorphism, rs2294008 C/T, within the prostate stem cell antigen (PSCA) gene as a risk variant for bladder cancer [1]. PSCA is a glycosyl phosphatidylinositol (GPI)-anchored cell surface protein from the Ly-6/Thy-1 family of cell surface antigens.
PSCA overexpression has been reported in bladder, prostate and pancreatic tumors. The risk allele (T) of rs2294008 creates a novel translation start site and extends the PSCA leader peptide sequence by 11 amino acids.

Methods
The mRNA expression in 42 bladder tumor samples and 39 adjacent normal bladder tissue samples (24 matched normal-tumor pairs) was explored using genome-wide RNA sequencing and targeted PSCA mRNA expression assays. For allelic expression imbalance studies, genotyping of rs2294008 in both DNA and cDNA samples was performed using an allelic discrimination genotyping assay. Alternative allele-specific splicing forms of PSCA were cloned and transfected into several human cancer cell lines. The endogenous expression of PSCA protein and the expression pattern of the recombinant PSCA allelic isoforms in different cancer cell lines were studied by western blotting, confocal microscopy and fluorescence-activated cell-sorting analysis. PSCA protein expression in normal and tumor bladder tissue samples was examined in relation to rs2294008 genotypes by using immunohistochemistry.

Results
PSCA mRNA was expressed at a 5.7-fold higher level in tumors than in matching normal bladder tissue samples (P = 0.0060). There was a strong allelic expression imbalance in tumor samples (P = 0.0020), based on 20 normal and 13 tumor samples that were heterozygous for rs2294008. PSCA mRNA expression was associated with the genotype of rs2294008 in both normal and tumor bladder tissue samples. Our preliminary data on the expression of recombinant allele-specific PSCA protein isoforms in transfected cells show a possible difference in the distribution of the cytoplasmic and membrane expression of these isoforms.

Conclusions
Our results suggest that the extension of the PSCA leader peptide by 11 amino acids, introduced by the risk allele (T) of rs2294008, may affect subcellular protein localization and the availability of functional GPI-anchored PSCA on the cell surface. These results may have clinical implications because antibodies that target cell-surface-expressed PSCA are in clinical trials for pancreatic and prostate cancer.

Surprisingly, all of the mutants, including 8-Δ, were viable and could withstand redox stresses; however, they were unable to activate or repress transcriptional events in response to hydrogen peroxide treatment, which was most evident in the 8-Δ mutant. In our work, network analysis was used to gain a better understanding of the biological networks whose gene expression is affected by these mutations.

Methods
Microarray data (provided by [1]) were processed for input into the Cytoscape plug-in jActiveModules. Active sub-networks for select mutants were identified using all yeast interactions found in the Kyoto Encyclopedia of Genes and Genomes (KEGG) [2] as the background network (including protein-protein, metabolic and gene expression interactions). Nodes in each sub-network were input into the Database for Annotation, Visualization and Integrated Discovery (DAVID) [3] to identify which KEGG pathways were present.

Results
Two hundred and six genes appeared in one or more of the active sub-networks. Only seven genes were present in the sub-networks of all strains. These were a known oxidative stress-induced aldose reductase (GRE3), four putative aryl-alcohol dehydrogenases (AAD3, AAD6, AAD10 and AAD14), a mitochondrial aldehyde dehydrogenase (ALD4) and a xylulokinase (XKS1).
All of the genes were upregulated on average by 6- to 12-fold in all strains, except for 8-Δ with a 1.5-fold average upregulation and 5Prx-Δ with a 3-fold average upregulation. Many metabolic pathways were affected by the knockouts; the pathway types affected depended on which peroxidase gene was knocked out. This result suggests that different thiol peroxidases may have a significant and specific impact on the regulation of metabolic pathways during oxidative stress. Surprisingly, the Gpx3-Δ active sub-network was similar to the Gpx1-Δ and Gpx2-Δ sub-networks. Gpx3 is known to sense hydrogen peroxide and pass that signal along to transcription factors; thus, it was expected that this sub-network would differ from that of the other Gpx mutants. Additionally, our results showed that amino acid metabolism, biosynthesis and degradation pathways were active in wild-type cells but were present in few mutant strains.

Conclusions
The results of this work indicate that thiol peroxidases, along with playing a key role in maintaining redox homeostasis, may also play a significant role in the regulation of metabolic pathways in yeast, thus illuminating the global role that thiol peroxidases play in oxidative stress.

Here, we present major improvements to the Metastats software and the underlying statistical methods. First, we describe new approaches to data normalization that allow a more accurate assessment of differential abundance by reducing the covariance between individual features implicitly introduced by the traditionally used ratio-based normalization. These normalization techniques are also of interest for time-series analyses or in the estimation of microbial networks. A second extension of Metastats is a mixed-model zero-inflated Gaussian distribution that allows Metastats to account for a common characteristic of metagenomic data: the presence of many features with zero counts owing to undersampling of the community. The number of 'missing features' (zero counts) correlates with the amount of sequencing performed, thereby biasing abundance measurements and the differential abundance statistics derived from them. Using simulated and real data, we show that these methods significantly improve the accuracy of Metastats. We also describe the addition of several new statistical tests to our code (including presence/absence and the corresponding odds ratio, and penetrance calculations) that improve the usability of our software in clinical practice.
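The normalization point can be illustrated by contrasting ratio-based (total-sum) scaling with a quantile-anchored alternative that damps the influence of a few dominant features; this is a generic sketch, not the Metastats implementation:

```python
# Total-sum scaling divides by full sample depth; a quantile-anchored
# scaling divides by the sum of counts at or below a chosen quantile.
import numpy as np

counts = np.array([[500, 30, 0, 5],    # sample 1: one dominant feature
                   [ 50, 25, 10, 5]])  # sample 2

total_sum = counts / counts.sum(axis=1, keepdims=True)

q = np.quantile(counts, 0.75, axis=1, keepdims=True)
anchor = np.where(counts <= q, counts, 0).sum(axis=1, keepdims=True)
anchored = counts / anchor

print(total_sum.round(3))
print(anchored.round(3))
```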
Background A recent genome-wide association study (GWAS) of bladder cancer identified a single nucleotide polymorphism (SNP), rs11892031, within the UGT1A gene cluster on chromosome 2q37.1, as a novel risk factor. The UGT1A locus encodes nine UGT proteins, which belong to the phase II cellular detoxification system. UGTs are functionally important for the detoxification of aromatic amines, which are found in industrial chemicals and tobacco smoke and are known risk factors for bladder cancer. The UGT-encoding genes have exons 2 to 5 in common but have different first exons, which define the enzymatic activity and substrate specificity of the gene products.
Methods and results We sequenced all nine highly similar alternative first exons for the UGT-encoding genes of up to 2,000 individuals. We identified 26 known nonsynonymous and 17 known synonymous coding variants but no novel variants. Imputation based on the GWAS dataset, a combined reference panel of HapMap 3 and the 1000 Genomes Project, and a subset of GWAS samples genotyped for all of the identified coding variants generated data for 1,170 SNPs within the whole UGT1A region. Of these markers, the strongest association was detected for an uncommon protective genetic variant that explained the original GWAS signal (odds ratio (OR) = 0.55, 95% confidence interval (CI) = 0.44 to 0.69, P = 3.3 × 10⁻⁷ in 4,035 cases and 5,284 controls; D' = 0.96, r² = 0.23 with rs11892031). No residual association in this region was detected after adjustment for this SNP. A typical genetic variant identified by GWAS for a common disease is expected to be a common allele (>10% minor allele frequency) that increases the disease risk. We show that the novel associated variant is an uncommon protective allele (1.14% in cases and 2.5% in controls). Interestingly, the risk allele (G) is conserved in 33 species, whereas the protective allele (T) is a human-specific variant. Even though this SNP is a synonymous coding variant, we show its association with quantitative mRNA expression of a specific functional splicing form of UGT1A6, probably through an exonic splicing enhancer.
Conclusions This study exemplifies that uncommon protective genetic variants are unusual suspects that may play important but underestimated functional roles in complex traits.
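As a rough plausibility check on the numbers above, a crude, unadjusted allelic odds ratio can be reconstructed from the quoted allele frequencies and sample sizes. The published OR = 0.55 comes from a covariate-adjusted, imputation-based analysis, so this 2x2 approximation is only expected to land in the same neighborhood:

```python
import numpy as np

# Reconstruct an approximate, unadjusted allelic odds ratio from the
# frequencies quoted above (1.14% of case alleles, 2.5% of control alleles;
# 4,035 cases and 5,284 controls). Not the published adjusted estimate.
cases, controls = 4035, 5284
f_case, f_ctrl = 0.0114, 0.025

a = f_case * 2 * cases          # protective alleles among case chromosomes
b = 2 * cases - a               # remaining case chromosomes
c = f_ctrl * 2 * controls       # protective alleles among control chromosomes
d = 2 * controls - c            # remaining control chromosomes

or_hat = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # Woolf's method
lo, hi = np.exp(np.log(or_hat) + np.array([-1.96, 1.96]) * se_log_or)
print(f"unadjusted OR = {or_hat:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# Prints roughly OR = 0.45 (0.35-0.57): same direction and magnitude as
# the adjusted OR = 0.55 (0.44-0.69) reported in the abstract.
```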
Background Horizontal gene transfers (HGTs) are pervasive in prokaryotes [1], being the routes of net-like evolution that collectively dominate the evolution of prokaryotes [2]. However, in eukaryotes, the effect of HGT has not been thoroughly analyzed, with the exception of the massive HGT from the endosymbionts [3]. Here, we report a comprehensive analysis of likely HGT events in different groups of unikonts (Amoebozoa, Archamoebae, Mycetozoa, the Fungi/Metazoa group, Choanoflagellida, Fungi and Metazoa).
Methods We analyzed the complete proteomes of 36 species of unikonts: 1 from the Archamoebae, 1 from Mycetozoa, 18 from Fungi, 13 from Metazoa and 1 from Choanoflagellida. These proteomes were manually selected to widely represent the unikont supergroup. Initial pre-candidate genes were obtained by analyzing each proteome using the DarkHorse program [4]. The program BLASTClust was then used to make clusters of putative unique transfer events at the origin of the different groups of unikonts. These clusters were separated into two groups: group I candidate clusters (clusters with no eukaryotic representative other than the unikont group analyzed), and group II candidate clusters (clusters with representatives from prokaryotes, the unikont group analyzed and other eukaryotes). Sequences from group I candidate clusters were analyzed using BLAST versus the nr and RefSeq databases, compared with the clusters of orthologous groups for eukaryotic complete genomes (KOGs) [5] and manually curated to remove false positives that result from bacterial contamination of the genomic DNA. Group II candidate clusters were analyzed using a series of automatic, conservative filters to assess the quality of the candidates. Finally, all clusters were phylogenetically analyzed to define the final candidates and to infer putative donors.
Results Using this methodology, we detected numerous probable HGT events from prokaryotes (mainly Bacteria) to unikonts. These events are not distributed uniformly throughout the evolution of unikonts: for example, almost all HGTs detected in Amoebozoa occurred after the divergence of Archamoebae and Mycetozoa. Importantly, we also detected many HGT events from Bacteria to Fungi, Choanoflagellida and Metazoa.
Conclusions Although HGTs are not as pervasive in eukaryotes as in prokaryotes, the amount of HGT detected in this study suggests that the acquisition of genes from Bacteria played a major role in the evolution of the unikonts.
Background Most studies exploring cancer progression have focused on the influence of individual genes, and few efforts have investigated the effects of interactions between genes within the genome. Our hypothesis is that cancer cells thrive by exploiting combinations of genes, in fact by exploiting networks of genes that both protect the cell against destruction and enhance its survival. We believe that these networks involve genes that tend to be coordinated in their copy number alterations, even when they are located at a distance in the genome. Radiation hybrid (RH) cells carry a random assortment of genes at triploid rather than diploid copy number. Our recent work studying genetic networks in libraries of RH cells has elucidated key survival-enhancing interactions with high specificity [1]. Because of the hardiness of the RH clones, statistically significant patterns of co-inherited, unlinked triploid gene pairs pointed to the cell survival mechanism. We identified more than 7.2 million significant interactions at single-gene resolution using the RH data.
Methods Our work with the RH data provided the rationale for an investigation of cancer survival networks, in particular for glioblastoma multiforme, a formidable brain cancer for which extensive datasets are available but few treatment options exist. We investigated correlated patterns of copy number alterations for distant genes in glioblastoma multiforme tumors using the same method we employed to construct the RH survival network. Public data were analyzed from 301 glioblastomas that had been assessed for copy number alterations using array comparative genomic hybridization [2].
Results The glioblastoma and RH survival networks overlapped significantly (P = 3.7 × 10⁻³¹). We therefore exploited the high-resolution mapping of the RH data to obtain single-gene specificity in the glioblastoma network. The combined network features 5,439 genes and 13,846 interactions (false discovery rate (FDR) <5%) and suggests novel approaches to therapy for glioblastoma. For example, although the epidermal growth-factor receptor (EGFR) oncogene is frequently activated in glioblastoma, EGFR inhibitors have limited therapeutic efficacy [3]. In the combined glioblastoma survival network, there are 46 genes that interact with EGFR, of which ten (22%) happen to be targets of existing drugs. This observation suggests that a flanking attack strategy that strikes at both EGFR and its partner genes in the glioblastoma survival network may be an effective approach to treating these tumors.
Conclusions By elucidating a genetic survival network for glioblastoma, we gained insight into the mechanisms of proliferation of this cancer and opened up new avenues for therapeutic intervention.
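A cutoff such as "FDR < 5%" over thousands of candidate interactions is typically enforced with a step-up procedure. A minimal sketch of Benjamini-Hochberg on simulated P-values follows; the procedure is standard, though the abstract does not specify which FDR method the authors used:

```python
import numpy as np

def bh_reject(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up: reject the k smallest P-values, where k is
    the largest rank i with p_(i) <= alpha * i / m."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

rng = np.random.default_rng(1)
pvals = np.concatenate([rng.uniform(0, 1, 9_000),       # null hypotheses
                        rng.uniform(0, 1e-4, 1_000)])   # true signals
print(f"{bh_reject(pvals).sum()} of {len(pvals)} tests pass FDR < 5%")
```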
Background Hundreds of diverse genetic loci have been linked to autism spectrum disorders (ASDs), making large-scale analysis essential for understanding the molecular events underlying the pathogenesis of these disorders. Our laboratory first released the autism database AutDB in 2007 as a bioinformatics tool for systematic curation of all known ASD candidate genes [1][2][3]. AutDB was designed with a systems biology approach, integrating genetic entries within the Human Gene module with corresponding behavioral, anatomical and physiological data in the Animal Model module. In June 2011, we released a new Protein Interaction (PIN) module of AutDB, which serves as a comprehensive, up-to-date resource on the direct protein interactions of ASD-linked genes.
Methods To curate the PIN module, our researchers utilize a multi-level annotation model to systematically search, collect and extract information entirely from published, peer-reviewed scientific literature. Although we initially consult public molecular interaction databases (HPRD and BioGRID) and commercial molecular interaction software (Pathway Studio, version 7.1), every interaction is manually extracted and verified by evaluating the primary reference articles from PubMed. Our manual curation has proved critical for accurate annotation, because these references were the second largest source of references for the initial PIN dataset, providing more interactions than both HPRD and Pathway Studio. Each ASD gene entry within the PIN module is presented as a multi-level display, with interactive graphical and tabular views of its corresponding interactome.
Results The initial PIN dataset includes interactomes for 86 ASD candidate genes, with a total of 1,311 direct protein interactions garnered from 533 unique primary references. These interactomes are composed of 6 interaction types and 13 species, documented by 402 distinct pieces of evidence. Our researchers will expand and maintain the data content of the PIN module with systematic updates.
Conclusions We have created an integrated bioinformatics tool that can be used for the large-scale analysis of the biological relationships among ASD candidate genes. Such network analysis is envisioned to provide a framework for identifying the key molecular pathways underlying ASD pathogenesis, potentially leading to the development of novel drug therapies.
Background Bladder cancer is the 9th most common cancer worldwide and the 13th most common cancer-related cause of death. Bladder cancer frequently recurs after the removal of primary carcinomas. This recurrence leads to repeated surgeries and long-term treatment and surveillance, making it the most expensive type of cancer to treat. Genetic factors and environmental factors such as cigarette smoking and occupational exposure to aromatic amines are linked to bladder cancer risk. Genome-wide association studies (GWAS) for bladder cancer have identified multiple genetic variants within genes and regions, including TP63, TERT-CLPTM1L and 8q24.21, to be highly associated with disease risk. Whole transcriptome sequencing (RNA-Seq) is a revolutionary tool for generating a large amount of qualitative and quantitative information, thus helping to explore known and novel transcripts, splicing forms and fusion genes.
Methods To understand the genetic and genomic landscape of the GWAS susceptibility regions, we investigated and characterized the entire transcriptome of normal and tumor bladder tissue samples by using powerful massively parallel RNA sequencing. We used an Illumina HiSeq 2000 instrument to sequence six paired samples of normal and tumor bladder tissues. For each of the samples, we generated 50 Gb of 100-bp reads to represent the whole transcriptome.
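As a back-of-the-envelope check on that sequencing yield: 50 Gb of 100-bp reads corresponds to 500 million reads per sample. The transcriptome size used below for a depth estimate is an assumed round figure, not a number from the abstract:

```python
# Worked arithmetic for the per-sample yield quoted above.
total_bases = 50e9          # 50 Gb per sample
read_length = 100           # bp
n_reads = total_bases / read_length
transcriptome_size = 1e8    # ~100 Mb of expressed sequence (an assumption)

print(f"reads per sample: {n_reads:.0f}")                              # 500,000,000
print(f"mean depth over transcriptome: {total_bases / transcriptome_size:.0f}x")
```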
Results Using the Bowtie/TopHat and Samtools packages, we successfully aligned approximately 80% of the total sequence reads against the human genome reference sequence (hg19). Our analysis sought to identify alternative splicing forms, novel exons, non-coding transcripts and chimeric fusion events. Total levels of mRNA in normal and tumor samples were evaluated by Cufflinks analysis based on the Ensembl transcripts database. Multiple splicing isoforms were identified for some of the GWAS susceptibility genes, and some of these isoforms were differentially expressed between the tumor and normal samples. We found that novel transcripts and non-coding RNAs corresponding to gene desert regions such as 8q24 were abundantly expressed. Our next step will focus on validation of these differentially expressed genes and novel transcripts by using quantitative RT-PCR on independent samples.
Conclusions Using RNA-Seq, we explored transcripts corresponding to candidate regions identified by bladder cancer GWAS. Some of these transcripts demonstrated splicing variability and differential levels of expression between normal and tumor tissue samples, which might be of importance for bladder cancer.
Background Recent genome-wide association studies (GWAS) have identified multiple genetic variants associated with the risk of developing prostate cancer (PrCa). At least ten PrCa-associated single nucleotide polymorphisms (SNPs) are located within a gene-poor region on chromosome 8q24, but the functional mechanisms of each of these variants remain unknown. Normal prostate development, as well as tumor initiation and progression, greatly depends on the androgen receptor (AR) and its ligands, testosterone and 5α-dihydrotestosterone. We hypothesized that genetic variants associated with PrCa risk might be important owing to their effects on AR-binding sites.
Methods and results We comprehensively explored 11 PrCa GWAS published as of July 2011 in the National Human Genome Research Institute's GWAS database [1] and in PubMed [2]. We selected ten SNPs from the 8q24 region that were significantly and consistently associated with PrCa in Caucasian datasets (P < 5 × 10⁻⁷). By querying the CEU 1000 Genomes Project panel, we generated a list of 224 SNPs in high linkage disequilibrium (r² > 0.8) with the ten selected GWAS SNPs. Of all of the SNPs on this list, six variants were located in the regions identified as AR-binding sites, based on AR chromatin immunoprecipitation (ChIP)-Seq data from the University of California, Santa Cruz's genome browser [3]. To test for differential binding of AR to alleles of the six SNPs, we developed a protocol for quantitative multiplex allele-specific ChIP (AS-ChIP) assays. Confirmatory AS-ChIP with AR-specific antibodies showed that five of these SNPs were heterozygous in the LNCaP cell line, and four of them showed statistically significant allele-specific differences in AR binding (P-value range = 0.0005 to 0.04, based on four biological replicates of AS-ChIP).
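One common way to read out allele-specific binding at a heterozygous SNP is to test the observed allele ratio in the ChIP material against the 50:50 expected under equal binding. The abstract's assay is a quantitative multiplex AS-ChIP rather than a read-count assay, so the sketch below, with hypothetical counts, is an analogy rather than the authors' exact statistic:

```python
from scipy import stats

# Hypothetical allele-tagged counts in the AR ChIP sample at one
# heterozygous SNP; under equal binding we expect a 50:50 split.
ref_reads, alt_reads = 180, 120
n = ref_reads + alt_reads

result = stats.binomtest(ref_reads, n=n, p=0.5, alternative="two-sided")
print(f"ref fraction = {ref_reads / n:.2f}, P = {result.pvalue:.4f}")
```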
Background Metagenomics has opened the door to unprecedented comparative and ecological studies of microbial communities, ranging from the sea [1] to the soil (the terragenome) to within the human body [2,3]. Most analyses begin with assembly, as the short reads that are characteristic of most datasets severely limit the ability to classify the data taxonomically [4][5][6][7] and require considerable computational resources to perform comparative analyses (such as BLAST against public databases). In addition, given that many sequences are likely to be from novel organisms, classification methods relying on databases fail to acknowledge most of the novel species present in the dataset. In an attempt to move away from reference-based analysis, computational tools based on promising algorithmic and statistical methods for metagenomic de novo assembly have recently started to emerge [8,9]. However, to date, they either are ill-suited to large datasets or have yet to offer significant improvements over existing genome assemblers that were not designed for metagenomic assembly.
Methods Here, we describe MetAMOS [10], an open-source, modular assembly pipeline built upon AMOS and tailored specifically for metagenomic next-generation sequencing data. MetAMOS is the first step toward a fully automated assembly and analysis pipeline, from mated reads (Illumina and 454) to scaffolds and ORFs. Currently, MetAMOS has support for four assemblers (SOAPdenovo [11], Newbler, CABOG and Minimus [12]), three annotation methods (BLAST, PhymmBL and MetaPhyler), two metagenomic gene prediction tools (MetaGeneMark and Glimmer-MG) and one unitig scaffolder engineered specifically for metagenomic data (Bambus 2). We also provide a novel graph-based algorithm to propagate annotations rapidly to all contigs in an assembly using, for example, only the largest contigs or contigs with high-confidence classification; a simple variant of this idea is sketched after this abstract. MetAMOS has three principal outputs: subdirectories containing FASTA sequences of the contigs/scaffolds/variant motifs belonging to a specified taxonomic level, a collection of all unclassified/potentially novel contigs contained in the assembly, and an HTML report with detailed assembly statistics and summary charts.
Results and conclusions We compared MetAMOS with other metagenomic assembly tools (Meta-IDBA and Genovo) and with genome assemblers that have previously been used with metagenomic data (CA-met and SOAPdenovo). We used both a mock/artificial dataset generated for the Human Microbiome Project (HMP) and real metagenomic samples from the HMP and its European counterpart (MetaHIT). On the mock dataset, MetAMOS compares favorably to existing metagenomic and genomic assemblers with respect to several validation metrics that take into account contig accuracy in addition to size. On the real dataset, MetAMOS also outperforms the existing software. These improvements can largely be attributed to heavy reliance on Bambus 2 and to assembly verification techniques that help identify and remove potentially chimeric contigs while running the pipeline. In terms of biology, we were able to report several novel variant motifs that would be challenging at best to identify and extract from the output of other methods. In addition, much emphasis was placed on making MetAMOS compatible with a variety of next-generation sequencing technologies, genome assemblers and annotation methods, making the pipeline highly customizable for the beginner and advanced bioinformatics user alike.
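One simple form of graph-based annotation propagation, sketched on a hypothetical contig graph (not necessarily the exact MetAMOS algorithm): seed labels from confidently classified contigs, then let unlabeled contigs adopt the majority label among already-labeled neighbors, breadth-first.

```python
import networkx as nx
from collections import Counter

# Hypothetical contig adjacency graph (edges = assembly linkage).
g = nx.Graph()
g.add_edges_from([("c1", "c2"), ("c2", "c3"), ("c3", "c4"), ("c4", "c5")])
labels = {"c1": "Bacteroides", "c5": "Bacteroides"}   # high-confidence seeds

frontier = list(labels)
while frontier:
    nxt = []
    for node in frontier:
        for nb in g.neighbors(node):
            if nb not in labels:
                # Majority vote among this contig's already-labeled neighbors.
                votes = Counter(labels[m] for m in g.neighbors(nb) if m in labels)
                labels[nb] = votes.most_common(1)[0][0]
                nxt.append(nb)
    frontier = nxt

print(labels)   # every contig reachable from a seed now carries a label
```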
The mutation spectrum in Indian patients with Gaucher disease
Gaucher disease is the most common lysosomal storage disorder. It results from an inherited deficiency of the enzyme glucocerebrosidase (GBA); accumulation of the substrate of this enzyme has many clinical manifestations. Since the discovery of the GBA gene, more than 200 mutations have been identified, but only a handful of mutations are recurrent (L444P, N370S, IVS2, D409H and 55Del). To determine the spectrum of mutations in the Indian population, we performed mutational screening in children with Gaucher disease. Twenty-four patients from twenty families were enrolled in this study, after written informed consent was obtained. The diagnosis of Gaucher disease was based on mandatory clinical and biochemical analysis. An initial screening for five common mutations was carried out using PCR-RFLP. Patients who were negative for common mutations were screened by sequencing exons 9 to 11 (a mutation hotspot region) [1]. We identified common mutations (L444P, N370S, IVS2 and D409H [2], and 55Del [3]) in approximately 50% of the patients. L444P (c.1448T>C) was the most frequently identified mutation in our patients, followed by D409H. Western data show that N370S is the most common mutation in Romanian patients [4]. One polymorphism (E340K) was identified in two patients who were compound heterozygotes for A456P/R463C and S237F/A269P, respectively. Our data highlight the spectrum of mutations that lead to Gaucher disease in the Indian population.
Background Given differential gene expression data across divergent mutant strain arrays of two enzyme subgroups, it would be logical to segregate by protein group ablation (PGA). Discrete correlate summation (DCΣ) was utilized to examine the differential effects of a hydrogen peroxide stressor on discrete and total yeast knockouts of the genes encoding glutathione peroxidase (Gpx) and peroxiredoxin (Prx), both groups starting from the wild-type (WT) strain [1]. While the half-life of the total Gpx knockout mutant is intermediate between that of the WT and the transient total Prx knockout mutant, the distribution of passage number of the various mutant strains can be separated into two groups independent of Gpx and Prx state. Based on half-viability, totalPrx <<<< nPrx << Gpx3 = Tsa1 < totalGpx < mPrx <<< Gpx1 < Gpx2 << Ahp1 = WT <<< Tsa2 (P < 0.0005, two-tailed t-test, n = 5, 6). DCΣ was also employed for the boundary between robust and gracile cultures. The aim of this study was to find the characteristic response of the transcriptome from the perspective of PGA versus strain viability (SV).
Methods DCΣ is a method used to score variables that can be classified into two groups [2]. It is a composite score of a gene's mean group change and overall interaction difference relative to all others tested. Transcripts were included in this analysis only if the values for all conditions passed microarray quality control and were present in the Kyoto Encyclopedia of Genes and Genomes (KEGG) network [3]. Randomly sorted edges were sampled for comparison (P < 0.001, two-tailed t-test, n = 8,372). Edges that were sorted on average DCΣ score and grouped by biological process yielded a distinctive topology (P < 1e-85, two-tailed t-test, n = 8,372). The identified transcripts were subjected to functional annotation in the Database for Annotation, Visualization and Integrated Discovery (DAVID) [4].
Results Application of DCΣ to the individual and complete knockouts of Gpx (3 genes) and Prx (5 genes) identified 92 transcripts based on PGA and 43 based on SV, with a 13-gene overlap (corresponding to the proteins Arg1p, Aah1p, Ade17p, Pgm2p, Cat2p, Cdd1p, Mae1p, Arg3p, Nma2p, Ole1p, Cta1p, Spb1p and Cds1p).
Functional annotation analysis of the 92 PGA transcripts identified the following functions: pyrimidine metabolism, steroid biosynthesis, purine metabolism, RNA polymerase and terpenoid backbone biosynthesis. Ergosterol biosynthesis, gluconeogenesis and transcription from Pol I/III promoters were major biological process categories for this set. Interestingly, terpenoids feed into the steroid pathway, which results in the vitamin D2 precursor ergosterol. Analysis of the 43 SV transcripts identified starch and sucrose metabolism, butanoate metabolism, and fructose and mannose metabolism. Stress response was the key biological process for this arm of the study. No functional annotations were statistically significant for the common genes. Transcripts identified by PGA of either the Gpx- or Prx-encoding genes tend toward transcriptional control mechanisms, whereas SV-associated transcripts track with metabolic necessities.
… of a disease or on the underlying mechanisms. Many studies have shown that variations in gene expression among individuals, as well as among cell types, contribute to phenotype diversity and disease susceptibility. Recent genome-wide expression quantitative trait loci (eQTL) association (GWEA) studies have provided information on genetic factors, especially SNPs, that are associated with gene expression variation. These expression-associated SNPs (exSNPs) have already been utilized to explain some results of GWAS for diseases, but interpretation of the data is handicapped by the low reproducibility of the genotype-expression relationships.
Methods To address this problem, we established several gold standard sets of high-reliability exSNPs based on multiple occurrences in different GWEA studies in various human populations and cell types. We then related these data to results from GWAS for diseases, to find a set of disease-associated loci that are likely to have an underlying expression mechanism. HapMap linkage disequilibrium data were utilized to allow the comparison of GWEA results from studies that employed different microarray SNP sets.
Results We integrated the current gold standard data with SNPs in disease-associated loci from the Wellcome Trust Case-Control Consortium (WTCCC) GWAS of seven common human diseases. Approximately one-third of these disease-associated loci in the WTCCC GWAS were found to be consistent with an underlying expression change mechanism. Comparing separate gold standard sets for Caucasian (CEU), African (YRI) and Asian (ASN) populations also allowed us to investigate which exSNPs contribute to population-specific eQTLs.
Conclusions Use of the gold standard set of SNP-expression relationships has enabled us to more reliably determine the role of expression changes in common human diseases.
… Eukarya-specific r-proteins [1]. Despite the high sequence conservation of r-proteins, the annotation of r-protein genes is often difficult because of their short lengths and biased sequence composition.
Methods To perform a comprehensive survey of prokaryotic r-proteins, we developed an automated computational pipeline for the identification of r-protein genes and applied it to 995 completely sequenced bacterial genomes and 87 archaeal genomes available in the RefSeq database. The pipeline employs curated seed alignments of r-proteins to run position-specific scoring matrix (PSSM)-based BLAST searches against six-frame genome translations, thus overcoming possible gene annotation errors.
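The six-frame search space itself is easy to generate: three reading frames of the genome plus three of its reverse complement, independent of any gene annotation. A minimal sketch using Biopython (an assumed dependency):

```python
from Bio.Seq import Seq  # Biopython

def six_frame_translations(dna: str):
    """Translate all six reading frames of a DNA sequence."""
    seq = Seq(dna)
    frames = {}
    for strand_name, s in (("+", seq), ("-", seq.reverse_complement())):
        for offset in range(3):
            sub = s[offset:]
            sub = sub[: len(sub) - len(sub) % 3]   # trim any partial codon
            frames[f"{strand_name}{offset + 1}"] = str(sub.translate())
    return frames

for frame, protein in six_frame_translations(
        "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG").items():
    print(frame, protein)
```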
Likely false positives are identified using comparisons against the original seed alignments.
Results In the course of this analysis, we gained insight into the diversity of prokaryotic r-protein complements, such as missing and paralogous r-proteins and distributions of r-protein genes among chromosomal partitions. A phylogenetic tree was constructed from a concatenated alignment of 50 almost-ubiquitous bacterial r-proteins. The topology of the tree is generally compatible with the current high-level bacterial taxonomy, although we detected several inconsistencies, possibly indicating uncertain or erroneous classification of the respective bacteria. Similarly, a concatenated alignment of 57 ubiquitous archaeal proteins was used for an archaeal phylogenetic tree reconstruction. In both Bacteria and Archaea, the patterns of the presence/absence of non-ubiquitous r-proteins suggest several independent losses and/or gains of these proteins. According to parsimony reconstruction, three bacterial and five archaeal r-proteins do not appear to be ancestral. Remarkably, all five non-ancestral archaeal r-proteins are present in Eukarya.
Conclusions Extended sets of prokaryotic r-proteins were created. Alignments of these sets may be used as new seed profiles for the identification of r-proteins in new genomes and for comparative genomics studies.
Broad clinical application of ultra-high-throughput sequencing is imminent. In a few notable cases, actionable information has been discovered from sequencing, and the number of such cases is likely to increase. At present, there are no widely accepted genomic standards or quantitative performance metrics. These are needed to achieve the confidence in measurement results that is expected for sound, reproducible research and regulated applications. The National Institute of Standards and Technology (NIST) has been approached about considering development in this area by several commercial entities and regulatory agencies. There is great enthusiasm for translation of sequencing from the research community to clinical practice, and standards that can be used to inform confidence in measurement results (for instance, through validation studies, proficiency testing and routine quality assurance) may be an enabling factor in that goal. NIST is currently gathering input from the genomics community about which reference materials and data would be useful. For example, NIST and the Coriell Institute for Medical Research may develop genomic reference material from cell lines from families that have already been characterized by a variety of sequencing methods (for example, the cell line from which NA12878 DNA is derived). In addition, we may build synthetic DNA constructs to test specific questions about measuring different types of variants or combinations of variants in different genomic contexts. For example, we might create pairs of constructs with single nucleotide polymorphisms, indels and/or structural variants in GC- or AT-rich regions or repeat regions. To ensure the design of appropriate standards, we are interested in discussing the design and application of genomic reference materials with any interested parties.
Background Protein-protein interactions (PPIs) are among the most fundamental biological processes at the molecular level. The experimental methods for testing PPIs are time-consuming and are limited by analogs for many reactions.
As a result, a computational model is necessary to predict PPIs and to explore the consequences of signal alterations in biological pathways. Reproductive control of the vector Anopheles gambiae using transgenic techniques poses a serious challenge. To meet this challenge, it would help to define the biological network involving the male accessory gland (MAG) proteins responsible for successful formation of the mating plug [1]. This plug forms in the male and is transferred to the female during mating, hence initiating the PPIs in both sexes. As is the case in Drosophila melanogaster, a close relative of A. gambiae, some MAG proteins responsible for the formation of the mating plug have been shown to alter the post-mating behavior of females.
Methods and results The STRING database for known PPIs was used to identify orthologs of A. gambiae proteins in Drosophila (Table 1). Twenty-seven proteins are known to form the mating plug in A. gambiae, and 16 others were obtained as strings in the STRING database. Chromosome synteny comparisons for proteins with more than 50% identity between species were carried out using the Artemis Comparison Tool (…); they are upregulated in the reproductive tissues of both sexes. To understand the processes involved in plug formation, the Reactome database was used, and the hub proteins were identified in 49 of the 2,021 known processes in Drosophila. Twelve proteins were involved in the following processes: metabolism of proteins (8.8e-13), gene expression (2.0e-06), 3'-UTR-mediated translational regulation (7.7e-08), regulation of β-cell development (1.3e-06), diabetes pathways (6.8e-06), signal recognition (preprolactin) (5.0e-07) and membrane trafficking (1.3e-03). Of the top 50 proteins, 92% had orthologs in A. gambiae, with one identified in the mating plug and four others identified as strings to AGAP009584, which is found in the mating plug. Acp29AB was identified in the network and is known to induce post-mating responses in Drosophila, confirming that the network is reproductive and giving an insight into the possible pathways involved. The CG9083 (Q8SX59) protein was ranked first among the hub proteins but has no ortholog in A. gambiae. Interestingly, it has the same protein properties as the Plugin protein (AGAP009368) in A. gambiae, suggesting that Plugin may be the main protein in the PPI reproductive network in A. gambiae. The Whelan and Goldman (WAG) maximum likelihood tree evaluations of the plug proteins in A. gambiae and their orthologs in Drosophila showed that these proteins are involved in similar biological processes in both species, but the A. gambiae protein evaluation provided a better explanation for the expected process as it clustered in both pre-mated and post-mated PPIs.
DNA sequence motifs with the ability to form non-B (non-canonical) structures have been linked to a variety of regulatory and pathological processes. Although the exact mechanism is unknown, recent work has provided significant evidence that non-B DNA structures may play a role in DNA instability and mutagenesis, leading to both DNA rearrangements and increased mutational rates, which are hallmarks of cancer. We have developed algorithms to identify a wide variety of non-B-DNA-forming motifs, including G-quadruplex-forming repeats, direct repeats and slipped motifs, inverted repeats and cruciform motifs, mirror repeats and triplex motifs, and A-phased repeats.
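As an illustration of one motif class from this list: a commonly used pattern for a canonical G-quadruplex-forming sequence is four runs of three or more guanines separated by loops of 1-7 bases. The exact definitions used by the authors' algorithms and in non-B DB may differ from this sketch:

```python
import re

# A common regular-expression definition of a canonical G-quadruplex motif:
# four G-runs (>= 3 Gs each) separated by 1-7 nucleotide loops.
G4_PATTERN = re.compile(r"(?:G{3,}[ATGC]{1,7}){3}G{3,}", re.IGNORECASE)

seq = "TTAGGGTTAGGGTTAGGGTTAGGGAA"   # human telomeric repeat, a classic G4
for m in G4_PATTERN.finditer(seq):
    print(m.start(), m.end(), m.group())
```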
After identifying these motifs in the mammalian reference genomes of human, mouse, chimpanzee, macaque, cow, dog, rat and platypus, the data were made publicly available in non-B DB [1]. However, it soon became apparent that it was not feasible to annotate the ever-growing list of genomic data and that it would be more effective to provide researchers with a systematic tool to predict these motifs in their own genomic data. Thus, the non-B DNA Motif Search Tool (nBMST) was created, and it is freely available online [2]. nBMST is a web interface that enables researchers to interactively submit any DNA sequence for searching for non-B DNA motifs. Once a user submits one or more DNA sequences in FASTA format, nBMST returns a comprehensive results page that contains the following: downloadable files in both a tab-delimited format and a generic feature format (GFF); a visualization, including PNG images; and a dynamic genome browser created using the Generic Genome Browser (GBrowse) [3] (version 2.0). Currently, nBMST allows file sizes of up to 20 MB of DNA sequence to be uploaded and stores the results for registered users for up to six months. In summary, the purpose of nBMST is to help provide insight into the involvement of alternative DNA conformations in cancer and other diseases, as well as into other potential biological functions.
To date, data generated from GWAS have not been maximally leveraged and integrated with gene expression data to identify the genes and pathways associated with the most aggressive subset of breast cancers, triple-negative breast cancer (TNBC), which accounts for about 20% of all breast cancers. TNBC disproportionately affects young premenopausal women and has a higher mortality rate among African-American women. At present, no targeted treatments exist for TNBC, and standard chemotherapy remains the only therapeutic option. Integration of genetic mapping results from GWAS with gene expression data could lead to a better understanding of the genetic mechanisms underlying the molecular basis of the TNBC phenotype and to the identification of potential biomarkers for the development of novel therapeutic strategies.
Methods We mined data from 43 GWAS involving over 250,000 patients with breast cancer and 250,000 controls, reported through April 2011, to identify genetic variants (single nucleotide polymorphisms (SNPs)) and genes associated with risk for breast cancer. We then integrated GWAS information with gene expression data from 305 subjects (162 cases and 143 controls) to stratify TNBC and other breast cancer subtypes, as well as to identify functionally related genes and multi-gene pathways enriched by SNPs that are associated with risk for breast cancer and are relevant to TNBC. To stratify TNBC and to identify functionally related genes, we performed supervised and unsupervised analysis of gene expression data. We used a false discovery rate to correct for multiple testing. Pathway prediction and network visualization were performed using Ingenuity Systems' software.
Results Combining GWAS information with gene expression data, we identified 448 functionally related genes that stratified breast cancer subtypes into TNBC. A subset of these genes (130 genes) contained SNPs associated with risk for breast cancer; of these 130 genes, 122 correctly stratified TNBC. Pathway prediction revealed multi-gene pathways enriched by SNPs that are significantly associated with risk for breast cancer. Key pathways identified include the p53, nuclear factor-κB, DNA repair and cell cycle regulation pathways.
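Pathway enrichment of a gene list is conventionally scored with a hypergeometric tail probability. A minimal sketch with hypothetical pathway sizes (the abstract used Ingenuity's software, so this is the generic test, not necessarily its exact statistic):

```python
from scipy import stats

# Hypothetical sizes: does a pathway overlap the hit list more than chance?
universe = 20_000     # genes considered
pathway = 150         # genes in the pathway
hits = 448            # genes in the risk-associated list
overlap = 12          # pathway genes that are also hits

# P(X >= overlap) for X ~ Hypergeometric(universe, pathway, hits).
p = stats.hypergeom.sf(overlap - 1, universe, pathway, hits)
print(f"expected overlap ~ {pathway * hits / universe:.1f}, enrichment P = {p:.3g}")
```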
Conclusions Our results demonstrate that integrating GWAS information with gene expression data can be an effective approach for identifying biological pathways that are relevant to TNBC. These could be potential targets for the development of novel therapeutic strategies.
P36 Abstract not submitted for online publication.
The clinical reality of the post-genomic era is that we now face even more complex disease processes when provided with genomic information, including multifactorial genetic and genomic influences, and epigenetic and environmental factors. A useful example of the promise and perils of genomic technologies and information is breast cancer. By the mid-1990s, two genes (BRCA1 and BRCA2) had been identified, accounting for approximately 5% of affected individuals. Since then, surprisingly few genetic breast cancer risk factors have been identified to account for the remaining 95%. To efficiently and cost-effectively identify individuals at high risk, a combination of information components is required: a patient-reported personal and family medical history; clinical data (for example, a physical exam, pathology results, laboratory test results and imaging); and genetic/genomic results. Gaining comprehensive data from all of these areas provides the best risk assessment and management options for patients. Furthermore, high-quality patient and clinical information is essential for the accurate and reliable interpretation of genomic results. We have clinically implemented a platform that integrates all three informational components with multiple risk estimation models (REMs) to produce an effective automated method for risk-stratifying patients. Although this platform can be and has been applied to a wide range of genetic conditions, this presentation will use breast cancer to illustrate the approach. This system consists of three primary components: a secure …
The new and emerging field of systems medicine, an application of systems biology approaches to biomedical problems in the clinical setting, leverages complex computational tools and high-dimensional data to derive personalized assessments of disease risk. Systems medicine offers the potential for more effective individualized diagnosis, prognosis and treatment options. The Georgetown Clinical & Omics Development Engine (G-CODE) is a generic and flexible web-based platform that serves to allow basic, translational and clinical research activities by integrating patient characteristics and clinical outcome data with a variety of high-throughput research data in a unified environment to enable systems medicine. Through this modular, extensible and flexible infrastructure, we can quickly and easily assemble new translational web applications with both analytic and generic administrative features. New analytic functionalities specific to the needs of a particular disease community can easily be added within this modular architecture. With G-CODE, we hope to help enable the creation of new disease-centric portals, as well as the widespread use of biomedical informatics tools by basic, clinical and translational researchers, through providing powerful analytic tools and capabilities within easy-to-use interfaces that can be customized to the needs of each research community.
This infrastructure was first deployed in the form of the Georgetown Database of Cancer (G-DOC) [1], which includes a broad collection of bioinformatics and systems biology tools for analysis and visualization of four major omics types: DNA, mRNA, microRNA and metabolites. Although several rich data repositories for high-dimensional research data exist in the public domain, most focus on a single data type and do not support integration across multiple technologies. G-DOC contains data for more than 2,500 patients with breast cancer and almost 800 patients with gastrointestinal cancer, all of which are handled in a manner that allows maximum integration. We believe that G-DOC will help facilitate systems medicine by allowing easy identification of trends and patterns in integrated datasets and will hence facilitate the use of better-targeted therapies for cancer. One obvious area for expansion of the G-CODE/G-DOC platform infrastructure is to support next-generation sequencing (NGS), which is a highly enabling and transformative emerging technology for the biomedical sciences. Nonetheless, effective utilization of these data is impeded by the substantial handling, manipulation and analysis requirements that are entailed. We have concluded that cloud computing is well positioned to fill these gaps, as this type of infrastructure permits rapid scaling with low input costs. As such, the Georgetown University team is exploring the use of the Amazon EC2 cloud and the Galaxy platform to process whole exome, whole genome, RNA-Seq and chromatin immunoprecipitation (ChIP)-Seq NGS data. The processed NGS data will be integrated into G-DOC to ensure that they can be analyzed in the full context of other omics data. Likewise, all G-CODE projects will simultaneously benefit from these advances in NGS data handling. Through technology re-use, the G-CODE infrastructure will accelerate progress in a variety of ongoing programs that are in need of integrative multi-omics analysis and will advance our opportunities to practice effective systems medicine in the near future.
Background In this work, we study the benefits of using optical maps to improve genome assembly. Many modern assembly algorithms rely on a de Bruijn graph paradigm to reconstruct a genome from short reads. Ambiguities caused by repeats within the genome cause the final assembly to be broken up into many contigs, because the assembler does not have enough information to find the one correct traversal of the graph. Optical mapping technology can be useful for determining the correct path in the de Bruijn graph, by providing estimates of the locations of one or more restriction enzyme patterns in the genome, thereby constraining the possible traversals of the graph to only those that are consistent with the map. A particular traversal that does not align well with the optical map can be discarded as incorrect. Previous work has shown how to construct optical maps [1,2] and how to use them for scaffolding contigs [3].
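The reference optical map is, in essence, an ordered list of restriction fragment lengths, and a candidate sequence can be converted to the same representation in silico. A minimal sketch, with cut positions taken at the start of each recognition site for simplicity (real digests cut at an enzyme-specific offset, and real maps also carry sizing error):

```python
def in_silico_map(sequence: str, site: str = "GGATCC") -> list[int]:
    """Ordered fragment lengths from cutting at every occurrence of `site`.
    GGATCC is the BamHI recognition site, used here as an example."""
    fragments, prev = [], 0
    pos = sequence.find(site)
    while pos != -1:
        fragments.append(pos - prev)
        prev = pos
        pos = sequence.find(site, pos + 1)
    fragments.append(len(sequence) - prev)
    return fragments

print(in_silico_map("AAAAGGATCCTTTTTTGGATCCAA"))   # -> [4, 12, 8]
```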
Methods Our algorithm relies on a depth-first search strategy. As the depth-first search proceeds and its corresponding sequence is extended, we check whether the resultant sequence would generate an optical map that matches the optical map of the genome. If the candidate in silico optical map matches, we proceed with the depth-first search; otherwise, we backtrack in the depth-first search until we find a path that covers the entire graph and whose sequence has an optical map that matches the optical map of the entire genome. Although the total number of paths in the de Bruijn graph can be exponential in the number of nodes and edges in the graph [4], a reference optical map can effectively prune the search space of paths. To improve performance, we start by finding edges in the de Bruijn graph that can be uniquely placed on the optical map. These edges, which we call landmark edges, can also help guide our depth-first search. Although there may be multiple paths in the de Bruijn graph that can yield sequences with optical maps that match the genome's optical map, these paths all yield very similar sequences in most cases.
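A toy version of this pruned search, under strong simplifying assumptions that are mine rather than the authors' (error-free maps, cuts at the start of each recognition site, edge sequences concatenated without k-mer overlap):

```python
def fragments(seq, site="GGATCC"):
    """Ordered fragment lengths implied by cutting `seq` at `site`."""
    cuts = [i for i in range(len(seq)) if seq.startswith(site, i)]
    bounds = [0] + cuts + [len(seq)]
    return [b - a for a, b in zip(bounds, bounds[1:])]

def consistent(seq, ref_map, site="GGATCC"):
    """Is the partial sequence's map still a prefix of the reference map?"""
    frags = fragments(seq, site)
    if len(frags) > len(ref_map):
        return False
    # Completed fragments must match exactly; the last one is still growing.
    return (frags[:-1] == ref_map[:len(frags) - 1]
            and frags[-1] <= ref_map[len(frags) - 1])

def dfs(graph, node, seq, ref_map, target_len):
    if not consistent(seq, ref_map):
        return None                      # prune this branch immediately
    if len(seq) == target_len and fragments(seq) == ref_map:
        return seq                       # full-length, map-matching path
    for nxt, edge_seq in graph.get(node, []):
        found = dfs(graph, nxt, seq + edge_seq, ref_map, target_len)
        if found:
            return found
    return None

# Toy graph: the branch a->b is pruned because its map disagrees early.
graph = {"a": [("b", "GGATCCT"), ("c", "TTTT")],
         "b": [("d", "AAGGATCCA")],
         "c": [("d", "GGATCCAAAA")]}
ref = fragments("AAAA" + "TTTT" + "GGATCCAAAA")      # map of the true path
print(dfs(graph, "a", "AAAA", ref, target_len=18))   # AAAATTTTGGATCCAAAA
```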
Results Given modest assumptions about the errors in the optical map, initial simulations show that our algorithm is very effective at assembling bacterial genomes, given read lengths of 100 bp or longer. The majority of our assemblies match the original sequences used in our simulations very closely. We will also present the results of simulations aimed at measuring the effect of errors on the correctness of the reconstruction and at measuring how the choice of restriction enzymes can improve the sequence assembly.
Conclusions Our work shows that optical maps can be used effectively to aid in genome assembly. We are currently extending our approach to handle much larger graphs and to tolerate higher amounts of mapping error. In our final assembly, we would also like to be able to detect and mark regions that we are less certain about and regions that we are confident are correct.
… define functional diversity in comparison to organismal ecology, including an example of microbial metabolism linked to specific organisms and to host phenotype (vaginal pH) in the posterior fornix. We provide profiles of 168 functional modules and 196 metabolic pathways that were determined to be specific to one or more niches within the human microbiome, including details of glycosaminoglycan degradation in the gut. Understanding how and why these biomolecular activities differ among environmental conditions or disease phenotypes is, more broadly, one of the central questions addressed by high-throughput biology. We have thus developed the linear discriminant analysis (LDA) effect size algorithm (LEfSe) to discover and explain microbial and functional biomarkers in the human microbiota and other microbiomes. We demonstrate this method to be effective for mining human microbiomes for metagenomic biomarkers associated with mucosal tissues and with different levels of oxygen availability. Similarly, when applied to 16S rRNA gene data from a murine ulcerative colitis gut community, LEfSe confirms the key role played by Bifidobacterium in this disease and suggests the involvement of additional clades, including the Clostridia and Metascardovia. A quantitative validation of LEfSe highlights a lower false positive rate, consistent ranking of biomarker relevance, and concise representations of taxonomic and functional shifts in microbial communities associated with environmental conditions or disease phenotypes. Implementations of both methodologies are available at the Huttenhower laboratory's website [1,2]. Together, they provide a way to accurately and efficiently characterize microbial metabolic pathways and functional modules directly from high-throughput sequencing reads and, subsequently, to identify organisms, genes or pathways that consistently explain the differences between two or more microbial communities. This has allowed the determination of community roles in the HMP cohort, as well as their niche and population specificity, which we anticipate will be applicable to future metagenomic studies.
High-throughput sequencing (HTS) is an emerging technology that promises to deliver unparalleled information on genomic variations. As technology evolves and matures, and as a deeper understanding of this technology is gained, new and upgraded tools for analyzing HTS data will become available and will need to be evaluated and validated. To facilitate this cumbersome task, we have developed an HTS validation framework into which both in-house-generated synthetic datasets and well-characterized experimental datasets have been incorporated for controlled testing and evaluation of these analysis tools. Currently, the framework can be used to assess algorithms for short-read mapping, variant calling and RNA-Seq-derived gene expression measurements. The framework is deployed in the Amazon EC2 cloud so that it is available to the broader research community. Using our framework, researchers can further validate interfaced applications with preferred parameters, upload their own datasets for processing, and interface new applications with the framework for validation and comparison. We report the performance of several alignment, variant calling and RNA-Seq analytic tools that have been tested with our framework. We also provide feedback on the challenges and benefits of Amazon EC2 deployment.
Cite abstracts in this supplement using the relevant abstract number, e.g.: Liu X, et al.: A high-throughput-sequence analysis infrastructure technology investigation framework for the evaluation of next-generation sequencing software. Genome Biology 2011, 12(Suppl 1):P48.
2017-08-03T02:22:18.360Z
2011-01-01T00:00:00.000
{ "year": 2011, "sha1": "a0595d9d2728f39168008caa99afc6da98740179", "oa_license": "CCBY", "oa_url": "https://genomebiology.biomedcentral.com/track/pdf/10.1186/1465-6906-12-S1-P19", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "50b35a019348df8afe8838963f2d8e3aef2c5c01", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
6531400
pes2o/s2orc
v3-fos-license
Paroxysmal hemicrania as the clinical presentation of giant cell arteritis
Head pain is the most common complaint in patients with giant cell arteritis, but the headache has no distinct diagnostic features. There have been no published reports of giant cell arteritis presenting as a trigeminal autonomic cephalalgia. We describe a patient who developed a new-onset headache in her fifties, which fit the diagnostic criteria for paroxysmal hemicrania and was completely responsive to corticosteroids. Removal of the steroid therapy brought a reemergence of her headaches. Giant cell arteritis should be considered in the evaluation of secondary causes of paroxysmal hemicrania; in addition, giant cell arteritis needs to be ruled out in patients who are over the age of 50 years with a new-onset trigeminal autonomic cephalalgia.
Introduction Paroxysmal hemicrania (PH) is a rare primary headache disorder that typically begins in young adulthood, with a mean age of onset of 34 years, and preferentially affects women. 1 It is one of the trigeminal autonomic cephalalgias (TACs); thus, along with short-lasting bouts of severe head pain, patients experience a multitude of cranial autonomic symptoms including lacrimation, conjunctival injection, nasal congestion/rhinorrhea and the development of miosis and/or eyelid ptosis (Horner syndrome). 1 The other recognized TACs include cluster headache and SUNCT syndrome. The head pain of PH typically begins without warning, lasts 2 to 30 minutes and can occur anywhere from 1 to 40 times per day. The pain of PH is severe in intensity, one-sided and generally located in the ocular, periorbital, temporal or upper facial regions. 1 Paroxysmal hemicrania is considered one of the indomethacin-responsive headache disorders, as the headaches are completely alleviated on indomethacin, return when indomethacin is tapered off, and rarely if ever respond to any other medication, including other NSAIDs. Indomethacin responsiveness is actually required by the International Classification of Headache Disorders (second edition) to make the diagnosis of PH (Table 1). 1 Despite its rarity, a number of secondary causes of PH have been documented in the literature, including intracranial tumors, infections, intracranial hypertension and aneurysms. 2 We now present the first ever case of PH as the clinical presentation of giant cell arteritis (GCA).
Case Report A 56-year-old woman presented to the Emergency Department with a new type of headache that had begun 6 weeks prior. She described a constant, baseline dull ache located to the left of the vertex, and also reported daily short-lasting spikes of severe pain (10 out of 10 on the VAS pain scale) occurring in the left temple and left parietal region, which would occur intermittently throughout the day, lasting 15 min each. She would average between 5 and 10 attacks of pain exacerbation per day. These pain exacerbation periods were associated with cranial autonomic symptoms including left eyelid ptosis and left eye lacrimation. She would also experience migrainous associated symptoms including photophobia, blurred vision and nausea. In addition, she complained of short stabs of pain lasting 1-2 s, which would occur daily and multiple times in a day, mostly on the left side of her head but not in a specific location.
She had a many-year history of intermittent, more generalized headaches of mild intensity and without migrainous associated symptoms, which would resolve within several hours of onset without medication. In addition to headache, the patient on presentation felt overall very ill and lethargic. She did not, however, have jaw claudication by history. On examination, the patient had pain to palpation over the left greater occipital nerve, trochlear notch and supraorbital notch. She had a bounding left superficial temporal artery pulse but an absent temporal artery pulse on the right side. She had a left supraclavicular and left carotid bruit on neurovascular examination. The remainder of her general and neurologic examination was non-focal. Laboratory testing revealed a normal sedimentation rate (14 mm/h; normal range 0-15 mm/h) and a low-sensitivity C-reactive protein level of 3 mg/L (normal range 0-5 mg/L). Brain magnetic resonance angiography (MRA) suggested diffuse intracranial vessel stenosis involving the basilar artery and bilateral intracranial carotid arteries, thought more compatible with diffuse atherosclerosis, but central nervous system vasculitis was in the differential. She had multiple stroke risk factors, including hypertension, hyperlipidemia and chronic smoking. Computed tomography angiography (CTA) was then completed to better define the arterial stenoses, but this study did not denote any significant intracranial vessel abnormalities; there were no flow-limiting arterial stenoses and nothing suggestive of vasculitis. However, CTA of the neck vessels demonstrated a very high-grade stenosis of the proximal left internal carotid artery, but no evidence of dissection. The patient's headache symptomatology was consistent with a diagnosis of PH with associated idiopathic stabbing headaches, but because of her age and her general sense of feeling ill, a secondary cause of PH was considered. Rheumatology evaluated the patient and felt GCA was high in the differential because headache was the main presenting symptom of her illness; in addition, she complained of stabbing headaches, which are part of the presentation of GCA, and she had an absent temporal artery pulse on exam. 3 The decision was made to treat the headache with corticosteroids rather than indomethacin because of the possible morbidity that could result from withholding corticosteroids in the face of GCA. On oral prednisone 40 mg per day, the patient had a dramatic improvement in her pain, becoming headache-free within 24 h. Rheumatology diagnosed her with giant cell arteritis based on her robust response to steroids; however, a left temporal artery biopsy was completed, and this did not demonstrate arteritis. At the time of the biopsy she had been on prednisone for 3 days. The prednisone was tapered because of the negative biopsy results, but her headaches immediately returned to their presteroid state, and then resolved again once a higher dose of prednisone was reached. Rheumatology felt her underlying condition was still GCA even with a negative biopsy. Over several months' time, she intermittently tried to taper down her steroid dose, but the headaches would immediately return. Upon increasing the steroid dose, her headaches and autonomic symptoms would again resolve. She has been on corticosteroids now for 8 months and remains pain-free.
Discussion PH is one of the indomethacin-responsive TACs.
As more cases of this unique primary headache are seen in the clinic, more secondary mimics of the condition are discovered. This is important, as many of the secondary TACs are clinically indistinguishable from the primary forms, and in many instances secondary indomethacin-sensitive headaches respond to therapy in the same manner as the primary subtypes. 4 Any new secondary condition that has not been noted previously for the TACs should be documented in the literature, because it broadens the diagnostic work-up for these headaches. In this specific case it is important to note that GCA may present as a TAC, specifically PH. GCA has no definitive presentation in regard to headache. Head pain is the most common complaint in GCA patients, but the headache can occur anywhere on the head, not just the temples, be mild to severe in intensity and be dull to throbbing in quality; it is thus very amorphous in its presentation. 3 If there is associated jaw claudication and an elevated sedimentation rate or CRP level, then the diagnosis is highly suggestive of GCA; however, GCA cannot be completely ruled out even in the absence of these factors. Identification of GCA is critical because of the morbidity that can result if the disease goes untreated, including vision loss and stroke. The diagnosis of GCA is confirmed by temporal artery biopsy; however, this procedure can sometimes have high false-negative rates because of the known skip lesions associated with this form of arteritis. There are now data to suggest that GCA may be present even with a negative biopsy. 5 In a large study from the United Kingdom, specimen length of the biopsy sample was a crucial predictor of positive biopsy results. Specimen lengths of 0.7 cm or more had a significantly higher rate of positive results than smaller arterial samples. The subject of this case report had an approximately 2 cm arterial segment submitted to Pathology, so specimen length was most likely not a factor in a possible false-negative biopsy result. Another investigation, using Bayesian methodology and data from studies that reported the results of bilateral temporal artery biopsies, calculated that the sensitivity of a single temporal artery biopsy is 87.1%; thus, there is a greater than 10% false-negative result rate. 6 Finally, we are seeing a high rate of negative temporal artery biopsies at our institution in patients who clinically appear to have GCA. The departments of Rheumatology and Neurology are therefore questioning the biopsy technique being used by our vascular surgeons. Because not all patients have classic features of GCA (clinical and laboratory), it is important to be aware of the possibility of new clinical presentations. To date, there is no literature on GCA presenting as PH or any other TAC. Because PH is not common (occurring in 2 per 100,000 individuals) and typically occurs in younger adults (mean onset 34 years), an underlying secondary etiology should be considered, especially in those who present with PH after the age of 50 years. 7 We cannot state that this patient absolutely had PH, because the diagnosis depends on complete relief with indomethacin. 1 We can suggest, however, that based on her presentation this was a secondary form of a PH-like headache. We also cannot state for certain that the case patient had GCA, but this was highly suggested by her robust response to steroids, and this was the diagnosis made by Rheumatology.
Corticosteroids have been shown in single case reports to be somewhat effective in PH, but the dramatic response in the present case suggested steroid-responsive GCA and thus steroid-responsive, GCA-induced PH. 8 How GCA could present as a TAC can only be hypothesized. Positron emission tomography (PET) studies have demonstrated that PH is associated with significant activation of the contralateral posterior hypothalamus during attacks; thus the hypothalamus is a possible generator for this condition. 9 There is scant literature supporting injury to the hypothalamus during the active phase of GCA; however, this is felt to be a rare occurrence, which may explain why we are reporting the first-ever case of GCA presenting as PH and why it is not the typical headache presentation for GCA. 10 Of note, the patient also had a very high-grade internal carotid artery stenosis on the same side as her headaches, but this did not appear to play any role in headache pathogenesis: after carotid endarterectomy relieved the vessel stenosis, there was no change in her headache pattern, and she still required prednisone to remain pain-free. Based on this case report, GCA should be considered in the evaluation of secondary causes of paroxysmal hemicrania; in addition, GCA needs to be ruled out in patients who are over the age of 50 years with a new-onset TAC.
Partial migration of a maraena whitefish Coregonus maraena population from the River Elbe, Germany

The maraena whitefish Coregonus maraena is a threatened anadromous species in the North Sea, which in the past was decimated to near extinction. Since the late 1980s, several re-establishment programs have been implemented in rivers draining into the North Sea, but the scientific basis for sustainable conservation measures is often lacking, since little is known about the biology of this species. In this study, otolith microchemistry of fish ranging from 24.6 to 58.4 cm in total length (median 31.3 cm, SD 8.4 cm) was used to characterize the migration behavior of a reintroduced population of maraena whitefish from the River Elbe, Germany. Our analyses revealed the presence of 3 different migration patterns: (1) one-time migration into high-salinity habitat (North Sea) within the first year of life (29.6%), (2) multiple migrations between low- and high-salinity habitats starting in the first year of life (14.8%) and (3) permanent residency within low-salinity habitats, a pattern displayed by the majority (55.6%) of sampled individuals. Not only do these results reveal differential migration behavior, but they also indicate that permanent river residency is common in the River Elbe population of C. maraena. The role of the Elbe as both a feeding and a spawning habitat should thus be considered more explicitly in current conservation measures to support recovery of this species.

INTRODUCTION

A major threat for diadromous fish species is habitat alteration, which includes physical and chemical barriers that block natural migration routes, but also causes the direct loss of freshwater habitat for spawning or nursery (de Groot 2002, Limburg & Waldman 2009). Furthermore, as most diadromous species are of commercial importance, fishing is a contributing factor to the decline of many species (Limburg & Waldman 2009). These threats apply to both anadromous species migrating into rivers to spawn (e.g. salmonids), and to catadromous species migrating into the sea to spawn (e.g. eels). The maraena whitefish Coregonus maraena is a salmonid species (Salmonidae) and belongs to the subfamily Coregoninae (Nelson et al. 2016). The Coregoninae is a diverse taxon from the northern hemisphere, which demonstrates considerable variation both among and within species regarding morphology and behavior, for instance in the number of gill rakers or the migration strategy (e.g. Hansen et al. 1999, Harris et al. 2012, Jacobsen et al. 2012). In the majority of studies and in recent conservation efforts, the anadromous North Sea form of C. maraena, the subject of the present study, has been designated as North Sea houting C. oxyrinchus, e.g. in the Danish EU LIFE project running from 2005 to 2012 and the EU Habitats Directive (Council Directive 92/43/EEC of 21 May 1992 on the conservation of natural habitats and of wild fauna and flora). However, the nomenclature within the genus Coregonus has led to considerable discussion and confusion. Since the consideration of houting in the North Sea is not limited to the possibly extinct species C. oxyrinchus, but rather to the North Sea population of C. maraena (Bloch 1779) or a previously undescribed species (Kottelat & Freyhof 2007), we use the scientific name C. maraena instead of C. oxyrinchus following Mehner et al. (2018).
Whether whitefish populations from the North Sea should be considered a separate species from those in the Baltic Sea is still subject to scientific discussions (Dierking et al. 2014, L. F. Jensen et al. 2015, Mehner et al. 2018). However, there is evidence that the extant form of whitefish from the North Sea should be classified as a separate evolutionarily significant unit for conservation purposes, independent of the actual species status (Dierking et al. 2014). In the North Sea, C. maraena was formerly common and widespread throughout the Wadden Sea region (Duncker & Ladiges 1960, Jensen et al. 2003). In the 20th century, anthropogenic activities such as river regulations including the building of dykes, groins and sluices, as well as pollution (Hansen et al. 1999, Kammerad 2001b, Jensen et al. 2003), caused migration barriers and habitat loss, including the deterioration or even elimination of spawning grounds (Grøn 1987, Kammerad 2001b), and consequently almost led to the extinction of C. maraena (Hansen et al. 1999, Jensen et al. 2003). In the River Elbe drainage system, which includes one of the largest European estuaries (Pihl et al. 2002), C. maraena fisheries with annual yields of up to 23 t were supported until the early 20th century (Kammerad 2001a,b), but then the population collapsed due to the reasons mentioned above and this species became locally extirpated. Similarly, other anadromous species have been negatively affected (e.g. Atlantic salmon Salmo salar, twaite shad Alosa fallax, river lamprey Lampetra fluviatilis) or have been locally extirpated, e.g. sturgeon Acipenser sturio and allis shad Alosa alosa (Thiel & Thiel 2015). In the North Sea, only 2 small remnant populations of C. maraena persisted in the Danish rivers Vidå and Ribe Å (Jensen et al. 2003). Currently, C. maraena is classified as 'Vulnerable' (VU A2cd) in the IUCN Red List (Freyhof 2011), and as 'threatened' and/or 'declining' by the Convention for the Protection of the Marine Environment of the North-East Atlantic (OSPAR Convention), and it is a priority species listed in Annexes II and IV of the Habitats Directive. This means that special areas of conservation are required for the conservation of the species, and the species is in need of strict protection (Svendsen et al. 2018). The first programs for re-establishment of C. maraena in formerly inhabited rivers in the North Sea were set up for several Danish rivers and the German Eider−Treene river system in the late 1980s (Kammerad 2001b, Jensen et al. 2003, Jepsen et al. 2012), followed by the Rivers Elbe and Rhine (Kammerad 2001b, Borcherding et al. 2010, Dierking et al. 2014). Since 1997, the Elbe tributaries Seeve, Este, Oste, Luhe and Aue (Lühe) have each been stocked annually with 10 000−15 000 fingerlings of C. maraena (2−3 cm long) in spring, i.e. a few weeks after hatching in April (see www.schnaepel.de/). Nevertheless, natural reproduction in the River Elbe in the recent past has occurred only on a very low level (Thiel & Thiel 2015). To date, these programs have relied heavily on stocking, as currently too little is known about the environmental improvements needed for a natural recovery of these populations (Svendsen et al. 2018). At present, habitat conditions generally considered important for diadromous species, such as water quality and passability, have improved in several rivers (de Groot & Nijssen 1997, Borcherding et al. 2010), in principle paving a possible path to recovery in the future.
Investigations of C. maraena in the River Vidå showed that adult fish entered the river mostly in October and arrived at putative spawning areas in November (Hertz et al. 2019). Downstream migration started predominantly in December, and the fish entered the Wadden Sea in March and April. Other studies confirm spawning migrations into the rivers in early winter (Jepsen et al. 2012) and a return to the sea in spring (Jensen et al. 2003). However, other studies suggest a certain degree of intraspecific flexibility, both in terms of onset of migration and time spent in the river after spawning, as well as in terms of dispersal migration behavior, including non-migrating individuals and migrations at larger size and higher age (Borcherding et al. 2008). To date, no studies on the migration behavior of C. maraena have been carried out in the Elbe.

Otolith microchemistry has become an important tool for the identification of migratory behavior, particularly for diadromous species (Walther & Limburg 2012). The concentration of strontium (Sr) and barium (Ba) in ambient water, given as element:calcium (Ca) ratios, varies with water salinity; the Sr:Ca ratio is generally positively correlated and the Ba:Ca ratio is generally negatively correlated with increasing salinity (Tabouret et al. 2010). Accordingly, Sr:Ca ratios are usually negatively correlated to Ba:Ca ratios. These elements are incorporated into the calcium carbonate matrix of hard structures such as otoliths by substituting for calcium (Kalish 1990). As otoliths are chemically inert (Campana & Neilson 1985), different concentrations of Sr and Ba in fresh- and saltwater are reflected in the chemical composition of all otolith growth zones just like spatial fingerprints. Element:Ca ratios measured along a transect from the nucleus area to the edge of an otolith are therefore a suitable measure to reveal the migration history of diadromous fishes.

This study had the overarching goal to help close the knowledge gap with respect to habitat use and migration behavior of the C. maraena population in the River Elbe−Wadden Sea system. Specifically, our aims were (1) to characterize the use of low- versus high-salinity habitats (and thus migrations between the 2 habitats) of individuals from this population, (2) to assess the possible presence and prevalence of differences in migration behavior among individuals, and (3) to assess the role of possible underlying factors affecting migration behavior, such as ontogenetic changes and sex-specific differences. From an applied conservation perspective, this information can help resource managers to understand habitat requirements of this threatened species in the River Elbe−Wadden Sea system.

Study area

The study area was located in the lower River Elbe between Hamburg and Cuxhaven in northern Germany (Fig. 1). The river section between Geesthacht and Cuxhaven, where the Elbe discharges into the North Sea, is tidal and exhibits a salinity gradient, ranging from almost 0 to around 32 PSU (Boehlich & Strotmann 2008). Fishes from this lower section can freely migrate between freshwater and saltwater as there is no migration barrier.

Sampling and otolith preparation

We obtained 27 adult/subadult specimens of Coregonus maraena from bycatch of professional fisheries in the lower Elbe in June/July 2012 and February/March 2013.
Nine and 17 individuals were collected at 2 catch locations (Sites 1 and 2, respectively) in the freshwater section close to Hamburg, and 1 individual was caught in the polyhaline section (Site 3) of the Elbe close to Cuxhaven (Fig. 1). All fish were frozen after capture. One additional individual, which was hatched and raised in a freshwater aquaculture farm (BiMES Binnenfischerei, Leezen, Germany; Stiller 2010) and therefore experienced exclusively pure freshwater conditions over its lifetime, was obtained in July 2013 and served as a control.

[Fig. 1. Sampling locations in the freshwater section close to Hamburg, where 9 (Site 1) and 17 (Site 2) Coregonus maraena individuals were caught; one fish was caught in the polyhaline section of the River Elbe close to Cuxhaven (Site 3). The black bar marks the only migration barrier (weir) in the Elbe estuary.]

After defrosting, total length (TL) and sex of the individuals were determined. Opercula were removed for subsequent age determination. Sagittal otoliths of all individuals were extracted, cleaned with distilled water and air dried. One randomly chosen otolith per individual was used for otolith microchemical analyses. Specifically, thin sections (0.5 mm) of the otoliths were cut and glued to glass slides with Crystalbond Mounting Wax (Buehler; http://www.buehler.com). These sections were then ground manually using lapping papers of 30, 12 and 3 μm, consecutively, until the core area could be detected under a light microscope (Leica DM 750). Finally, otoliths were polished using aluminum paste (ALPHA MICROPOLISH 2, grain size 0.3 μm).

Laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) analysis of otoliths

Microchemical analyses of otoliths were performed at the Department of Geosciences of Bremen University, Germany, using a NewWave UP193ss solid-state laser with 193 nm wavelength coupled to a Thermo Element2 ICP mass spectrometer. Transects were set from the nucleus area to the edge of each otolith. Like some other salmonids (Kalish 1990), C. maraena possesses not just 1 but 2 core areas in its otoliths. The midpoint between these 2 areas was defined as the nucleus of the otolith and set as the starting point of ablation (Fig. 2). Prior to ablation, the blank signal was recorded for 20 s. The sample surface was pre-ablated with a spot size of 75 μm and a scan speed of 100 μm s−1. For transect ablation, a 50 μm spot size with 3 μm s−1 scan speed and a laser pulse rate of 10 Hz was used. Irradiance was approximately 1 GW cm−2. Flow rates of the carrier gas (helium) and the make-up gas (argon) were about 0.7 and 0.9 l min−1, respectively. The intensities of the isotopes 88Sr, 43Ca and 143Ba were measured. A synthetic glass (NIST 610; National Institute of Standards and Technology, Gaithersburg, MD) was used as an external calibration standard and was analyzed after every second transect. A correction for the isobaric interference of doubly charged 86Sr on 43Ca was performed based on analyses of the carbonate reference standard MACS-3 (Jochum et al. 2012). Analytical precision and accuracy were assessed by analyzing a pressed pellet of the otolith reference standard NIES CRM No. 22 (Yoshinaga et al. 2000) on each measurement day. Precision was around 5% and accuracy was better than 10%. As a result of the automated sawing during the otolith preparation process, some otoliths were not perfectly cut in their core region.
Thus, the first 150 μm of each measured otolith transect was excluded from subsequent analysis.

Differentiation between habitats

Different salinity regimes were identified from the frequency distribution of all measured Sr:Ca ratios using an approach similar to that of Daverat et al. (2011) and Magath et al. (2013). This approach is theoretically based on the expected multimodality of the frequency distribution, because C. maraena mainly uses 2 habitats of very different salinity (the Wadden Sea as feeding habitat and freshwater for spawning and early life stages, e.g. Jensen et al. 2003). In a first step, a multimodal frequency distribution was plotted with the singly measured Sr:Ca ratios of all individuals available. This distribution revealed 1 strong maximum at low Sr:Ca ratios and 2 weak maxima at higher Sr:Ca ratios. In the second step, the first strong maximum was separated from the 2 following weak maxima according to the expected main habitats in freshwater and the Wadden Sea, which resulted in the assignment of a low-salinity regime (Sr:Ca ratios ≤ 2.1 mmol mol−1) and a high-salinity regime (Sr:Ca ratios > 2.1 mmol mol−1; Fig. 3). The established salinity regimes agree well with findings for the closely related C. lavaretus from the Baltic Sea, where sea-spawning individuals had Sr:Ca values > 2.0 mmol mol−1 (Rohtla et al. 2017). Nevertheless, to further validate the approach, we used Sr:Ca ratios of fish from known salinity origins as reference. The freshwater reference was given by both the Sr:Ca ratios (all measurements) of the freshwater-reared individual which did not originate from the Elbe estuary ('freshwater control') and the last section (last 6 Sr:Ca measurements) of the ablated transects of the 26 individuals caught in the freshwater part of the Elbe estuary ('freshwater-caught fish'; see Fig. 1). Similarly, the last 6 Sr:Ca measurements of the ablated transect of the 1 individual caught in the lower reaches of the Elbe estuary (i.e. in polyhaline waters at the time of capture; 'polyhaline water-caught fish', see Fig. 1) provided the reference values for the habitat of higher salinity.

[Fig. 2. Thin section of a Coregonus maraena otolith illustrating the 2 core areas and the midpoint (black ellipse) defined as the otolith core from which transects (black arrow) were ablated.]

The calculated mean (± SD) value for freshwater-caught fish (Sr:Ca = 1.07 ± 0.44 mmol mol−1) was slightly above the mean value of the freshwater control (0.78 ± 0.07 mmol mol−1) and nearly equaled the strong first peak of the frequency distribution (Fig. 3), indicating that the defined low-salinity regime reflects limnic to slightly brackish waters. In contrast, the average Sr:Ca ratio of the polyhaline water-caught fish (4.48 ± 1.03 mmol mol−1) was close to the third peak; thus the high-salinity regime likely reflected medium brackish to euhaline waters. An inverse relationship between Sr:Ca and Ba:Ca ratios in the otoliths was observed (rho = −0.265, p < 0.001), which is well known from studies on habitat use of migratory fishes along a salinity gradient (Walther et al. 2011).
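The two-step, valley-based split described above is easy to make concrete. The following minimal sketch (not the authors' code; the simulated data, bin count and smoothing window are assumptions) histograms Sr:Ca ratios drawn from one strong low-salinity mode and two weak higher-salinity modes, then walks down the right flank of the strong first maximum to the valley separating the regimes:

```python
# Illustrative only: estimate the low/high-salinity boundary from the
# multimodal frequency distribution of Sr:Ca ratios (mmol/mol).
import numpy as np

rng = np.random.default_rng(1)
sr_ca = np.concatenate([
    rng.normal(1.0, 0.35, 4000),   # strong low-salinity mode
    rng.normal(3.0, 0.40, 500),    # weak brackish mode
    rng.normal(4.5, 0.50, 400),    # weak marine mode
])

counts, edges = np.histogram(sr_ca, bins=60)
counts = np.convolve(counts, np.ones(3) / 3, mode="same")  # light smoothing
peak = int(counts.argmax())                                # strong first maximum

# walk down the right flank of the first mode until the counts rise again
i = peak
while i + 1 < len(counts) and counts[i + 1] <= counts[i]:
    i += 1
threshold = edges[i + 1]
print(f"estimated regime boundary: {threshold:.2f} mmol/mol")  # close to 2.1
```

In the study itself the boundary was read off the plotted distribution (Fig. 3) rather than computed; the sketch only makes the two-step logic explicit.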
Determination of age and annuli

For age determination, annuli were counted along each ablated otolith transect from the LA-ICP-MS analysis. For this purpose, these sections were viewed under a light microscope (Leica DM 750) at 40−100× magnification using transmitted light, and possible ring structures were examined. The results were verified by examining opercula, as annuli can be identified more precisely in these hard structures compared to otoliths (Gerson 2013).

Data analysis and statistics

Temporal habitat use and potential movements of fish between the specified habitats (low- and high-salinity regimes) were detected by the measured Sr:Ca ratios along the ablation transects. Combined with age chronologies along the transects, this allowed for the reconstruction of individual migration life histories. Specific habitat uses and movements determined in this way were grouped into categories based on similarity (hereafter referred to as 'migration patterns'). Potential differences in the prevalence of these patterns between sexes and with age were then assessed using Fisher's exact test. Shapiro-Wilk tests were performed to test for normality of data. Because all data were not normally distributed, group sizes were unequal and the sample size of subsets was small (< 9), non-parametric tests were used for further comparisons (e.g. Raine et al. 2020). Correlations between Sr:Ca and Ba:Ca ratios as well as age and TL were tested using Spearman's rho statistic. Comparisons of fish size among sexes, age groups and migration strategies were conducted using the Mann-Whitney U-test. All statistical analyses were performed using R version 3.4.0 (R Core Team 2017).

Migration patterns

The otolith analyses revealed variability in the migration behavior of maraena whitefish. Individuals either showed temporal habitat use and movements between the specified habitats (the low-salinity regime including freshwater and slightly brackish water habitats and the high-salinity regime including medium brackish to euhaline waters) or stayed permanently within the low-salinity regime. The signal profiles resulting from Sr:Ca measurements along the ablation transects were categorized into 3 patterns, as follows. Pattern 1 was characterized by a one-time temporary increase in Sr:Ca ratios above the threshold of 2.1 mmol mol−1. Eight individuals (29.6%) were assigned to this pattern. Of these, 7 showed a subsequent decrease in Sr:Ca ratios below the threshold along the transect (Fig. 4a), indicating one migration from a low- to a high-salinity regime, followed by a return into the low-salinity environment. Three individuals showed slight modifications of the underlying signal profile, with 2 individuals (Nos. 2 and 7 in Fig. S1 in the Supplement at www.int-res.com/articles/suppl/n044p263_supp.pdf) that already showed high Sr:Ca ratios at the beginning of the measurements, and 1 individual (No. 8 in Fig. S1) for which Sr:Ca ratios remained high and did not decrease to low-salinity values again. The median age of individuals exhibiting pattern 1 was 2 yr, with an age span of 1−4 yr. There was a strong correlation between Sr:Ca and Ba:Ca ratios (rho = −0.745, p < 0.001) of all associated individuals, indicating an inverse relationship. Pattern 2 was characterized by repeated increases in Sr:Ca ratios above the predefined threshold along the transect (Fig. 4b), indicating several migrations between the 2 environments. Four individuals (14.8%) represented this pattern, with a median age of 3 yr (range 3−7 yr). An inverse relationship between Sr:Ca and Ba:Ca ratios was also detected here (rho = −0.638, p < 0.001). For those fish that were assigned to pattern 1 or 2 and which thus showed at least 1 seaward migration, this behavior occurred early in life (at the age of 0+).
Pattern 3 was defined as a long-term stay, i.e. residency, in the low-salinity regime with no habitat change, and was characterized by a signal curve that remained constantly below the predefined threshold (Fig. 4c). The majority of fish (15/27; 55.6%) displayed this pattern, all of which were sampled in the freshwater section close to Hamburg (see Fig. 1, Sites 1 and 2). The median age of these individuals was 2 yr (range 1−8 yr). Unlike in patterns 1 and 2, the value of the correlation coefficient (rho = 0.520, p < 0.001) indicated a positive correlation between the Sr:Ca and Ba:Ca signal profiles.

Migratory behavior and sex

There was no significant difference between males and females in the relative proportions of migratory (pattern 1 or 2) versus resident (pattern 3) migration strategies, with 7 out of 15 assessed females (47%) and 5 out of 12 males (42%) representing the migratory strategy (Fisher's exact test, p > 0.05).

Migratory behavior and fish size

As expected, TL was highly positively correlated with age (Spearman's rho = 0.69, p < 0.001), so to detect differences in body length between migrating (patterns 1 and 2) and non-migrating (pattern 3) individuals, age needed to be considered. As the majority of individuals had an age of 2 or 3 yr (N = 22), the subsequent analyses considered only these 2 age groups (AGs). There was no significant difference in TL between sexes for either AG 2 or AG 3 (U = 34, p > 0.05 for AG 2; U = 5, p > 0.05 for AG 3), so males and females were combined. For AG 2, migrating individuals showed significantly higher TL than resident individuals (U = 7, p < 0.05), with a median length of 38.1 cm (range 28.9−40.8 cm) compared to 28.8 cm (range 25.2−31.9 cm) in resident fish (Fig. 5). For AG 3, there was also a trend towards higher body size in migrating individuals (median 37.2 cm, range 37.0−41.6 cm) compared to residents (median 32.3 cm, range 30.9−44.2 cm, Fig. 5), but this difference was not statistically significant (U = 3, p > 0.05).

DISCUSSION

After maraena whitefish had almost disappeared from the North Sea as a result of human disturbance, re-introduction programs have aimed to ensure the return of this fish species to its formerly native range, including the River Elbe, and ultimately the establishment of self-sustaining (i.e. supported by natural reproduction) populations. To date, however, these programs have relied on continuous restocking, as appropriate management measures supporting natural population replenishment are still lacking due to the poor knowledge of the biology of this endangered species (Svendsen et al. 2018). The present study addressed this issue by investigating the habitat use and migration behavior of the reintroduced population of maraena whitefish in the Elbe for the first time. Using otolith microchemistry, 2 fundamentally different migration strategies were found to be expressed among individuals ranging from 24.6 to 58.4 cm in TL (median 31.3 cm, SD 8.4 cm) and 1 to 8 yr in age in this population: migration between different salinity regimes or permanent residency in low-salinity habitat. This phenomenon, which is known as partial migration of a population (Chapman et al. 2012b), can offer advantages in terms of adaptation to variable environmental conditions, but it also has specific conservation implications, which are discussed further in the section 'Implications for conservation measures' below.
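The assignment of transects to the three patterns follows mechanically from the 2.1 mmol mol−1 threshold. A minimal sketch (not the authors' code; the example transects are hypothetical) counts excursions above the threshold and maps them to the pattern labels used above:

```python
THRESHOLD = 2.1  # mmol/mol, boundary between low- and high-salinity regimes

def count_marine_excursions(sr_ca):
    """Count contiguous runs of Sr:Ca measurements above the threshold."""
    excursions, in_high = 0, False
    for ratio in sr_ca:
        if ratio > THRESHOLD and not in_high:
            excursions += 1
            in_high = True
        elif ratio <= THRESHOLD:
            in_high = False
    return excursions

def classify_pattern(sr_ca):
    """Pattern 1: one excursion; pattern 2: several; pattern 3: resident."""
    n = count_marine_excursions(sr_ca)
    if n == 0:
        return 3
    return 1 if n == 1 else 2

resident = [0.8, 1.1, 0.9, 1.3, 1.0]       # stays below the threshold
one_trip = [0.9, 1.2, 3.5, 4.1, 1.4, 1.1]  # one excursion, then a return
print(classify_pattern(resident), classify_pattern(one_trip))  # 3 1
```

A real analysis would additionally guard against measurement noise near the threshold, e.g. by smoothing the profile or requiring a minimum run length for an excursion.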
Migratory patterns

The finding of differential migration strategies, with 44% of the population showing migratory and 56% exhibiting permanent resident behavior, indicates a high degree of intraspecific variation within the Elbe population of maraena whitefish. According to Jensen et al. (2003), maraena whitefish from the North Sea reach sexual maturity at the age of 2−3 yr (males) or 3−4 yr (females). Applied to this study, most sampled fish that showed permanent resident behavior were likely to be mature already (66.67%). The mean TL of those resident fish was 32.3 cm, ranging from 25.2 to 58.4 cm, and ages ranged from 1 to 8 yr. Borcherding et al. (2008) observed the migration behavior of maraena whitefish from the River Rhine and found that some of the sampled fish stayed in freshwater for a relatively long time and migrated when they had reached a TL of about 30−35 cm. Therefore, it cannot be excluded that several of the 15 individuals that presented pattern 3 in the present study (of which 7 were smaller than 30 cm) would also have migrated at a later point in time if they had reached a greater body length. However, a larger proportion already migrated in the juvenile stage at a TL of 35−40 mm. L. F. Jensen et al. (2015) also discussed an observation of an earlier study which showed that juvenile Coregonus maraena of such a small size are already present in saltwater and even actively ingest food there, which indicates improved hyperosmotic tolerance in that early stage of life. In line with this, L. F. Jensen et al. (2015) compared the salinity tolerance of larvae and juvenile C. maraena and found that hyperosmotic tolerance increased with increasing body length. The juvenile C. maraena used in their study had a TL of 33−50 mm, indicating that those small fish develop the ability to hypo-osmoregulate and could migrate to higher salinities. In this context, it can be assumed that the non-migrating individuals in our study had already reached a tolerable size for migration long ago; and yet they did not do so. This confirms earlier reports showing that populations of migratory fishes can have high migratory plasticity (e.g. Cucherousset et al. 2005, Miller et al. 2010, Magath et al. 2013), some including resident individuals (e.g. Jonsson & Jonsson 1993, Chapman et al. 2012b, Kendall et al. 2015). The partial migration of C. maraena shown here is also known from other coregonids in the northern hemisphere, e.g. C. nasus, C. clupeaformis, C. sardinella and Stenodus leucichthys (Tallman et al. 2002, Howland et al. 2009, Harris et al. 2012). Partial migration to brackish water has also been documented in Baltic Sea populations of European whitefish C. lavaretus, with some populations migrating to spawn in coastal rivers and streams, and others spawning in shallow, low-salinity bays in the northern Baltic Sea (Sõrmus & Turovski 2003). Moreover, the population from the River Tornionjoki (northern Baltic Sea) shows differential migration behavior in the marine phase, with part of the population staying in the northern Baltic and another part migrating further south to higher-salinity waters (Jokikokko et al. 2018). Similarly to this study, migratory plasticity was also found in the C. maraena population from the Danish River Vidå, but there migration behavior mainly differed between relatively large early- and late-migrating individuals (mean ± SD length 42.7 ± 6.4 cm) and was not characterized by a high degree of residency as identified in the present study. Although in the River Rhine the majority of analyzed
C. maraena were non-migrating (Borcherding et al. 2008), the situation was different from that in the Elbe, as migration barriers (large dams) in the Rhine delta possibly affected the natural migration behavior of C. maraena. In contrast, the high proportion of permanent residency in the Elbe, despite the absence of migration barriers, was exceptional and unexpected in the present study. The development of a non-migrating tendency within an anadromous species may result from several and complex reasons (reviewed by Chapman et al. 2012a) that need to be considered in the context of the complete riverine and estuarine ecosystem. Gross (1987) argued that an anadromous lifestyle is useful when the benefits of more nutritious food in the marine habitat can offset the costs of a long migration into that habitat. Migration behavior reflects a balance between the benefits and costs of migrations that affects the fitness of fish (Jonsson & Jonsson 1993). The high proportion of permanent residency (i.e. feeding and spawning in the same habitat) in the Elbe in the present study would thus point to comparable net benefits for C. maraena. In principle, migrations may be motivated by the need to spawn, feed and seek refuge from predators, but human activities may also influence the dynamics of fish migration (Chapman et al. 2012b). In the case of C. maraena, migration to feed in the sea but return to spawn is most likely, based on what is known about the life history of the species. However, as the number of spawning individuals was not investigated, it is not possible to say conclusively whether or how many spawning migrations were carried out here. The underlying data suggest that the reasons for migration were already effective very early in life. Specifically, if an individual did not leave the low-salinity environment in the first year of life, it also did not do so at a later point in time. In contrast, some individuals migrated during the first year of life, returned into the low-salinity environment and became resident thereafter (pattern 1). With a median age of 2 yr, these individuals were relatively young. Therefore, it is possible that some of them would have migrated again later; this is also indicated by pattern 2, which is characterized by multiple migrations and older individuals (median age 3 yr, but including 1 individual of 7 yr) than pattern 1. Some of them returned to the low-salinity environment in the same year as their hatch. This is well before they reach sexual maturity, reported at the age of 2−3 yr (males) or 3−4 yr (females) (Jensen et al. 2003). In these cases, the migration thus does not represent a spawning migration. Immature maraena whitefish have been monitored in the lower sections of Danish rivers during winter (Jensen et al. 2003). This has also been observed in mature and immature individuals of closely related anadromous salmonids, especially within the genus Salvelinus, such as Arctic charr Salvelinus alpinus, which can return to rivers to overwinter (e.g. Klemetsen et al. 2003, A. J. Jensen et al. 2015). Similar behavior could also be a possible explanation for the early return of the fish examined here. The fact that the individuals assigned to patterns 1 and 2 were already caught in the Elbe in June/July also raises questions regarding contrasting temporal regulation of migration behavior among river systems, since spawning migrations have been observed in late fall in the Rivers Rhine (Borcherding et al. 2014) and Vidå (Hertz et al. 2019).
These observations suggest that living (including feeding) conditions in the Elbe may have been sufficient for C. maraena, possibly because of the low population density favoring residency (Jonsson & Jonsson 1993). The high residency rate also demonstrates that the river system of the Elbe is likely to be of major importance for C. maraena not only as spawning but also as feeding habitat. So far, it is unknown whether the reasons for the observed partial migration have an underlying phenotypic or genotypic origin. There is evidence that different genotypes as well as hybridizations of coregonids (C. maraena and non-migratory lake whitefish C. lavaretus) coexist in the Elbe (Dierking et al. 2014). C. maraena physiologically differs from C. lavaretus in terms of osmoregulation (Hertz et al. 2019). This is reflected in the ability of C. maraena to tolerate high salinities and undertake migrations into the North Sea, which C. lavaretus is not able to do (Grøn 1987). Studies elucidating a possible genetic basis for differential migration behavior (see e.g. Hess et al. 2016) would be an important future direction to pursue, but such an analysis was beyond the scope of the present study.

Irregularities in migration patterns

Some otoliths in the present study showed high Sr:Ca ratios, above the defined threshold separating the low- and high-salinity regimes, in the innermost (i.e. earliest observed) measurements (Fig. S1, Nos. 2, 7 and 10). The lack of information for the earliest life stage provided by the Sr:Ca signal profile (the first 150 μm were removed from each profile), combined with the tendency of a relatively fast downstream migration as observed for stocked juvenile fish of 20−60 mm TL in the Rhine system (Borcherding et al. 2006), could be a significant factor in this observation. Since there is no evidence that reproduction of C. maraena can occur in high-salinity habitats in the North Sea, it can be assumed that these individuals also initially lived in the low-salinity regime. According to Jensen et al. (2003), the physiology of maraena whitefish changes when an individual reaches a total length of 30−40 mm, so it can withstand the move to high-salinity waters from that point on. In contrast, 2 other otoliths showed high Sr:Ca ratios at the end of the profiles (Fig. S1, Nos. 8 and 9). A straightforward reason here may be that these individuals had only recently migrated into the River Elbe, where they were captured, and had not spent enough time in low-salinity waters for this to be reflected in the otoliths as correspondingly low Sr:Ca ratios. Such a time-delayed response of otolith Sr:Ca ratios to Sr variation in the ambient water has been noted in previous studies (e.g. Yokouchi et al. 2011). The time required to establish a state of equilibrium between the Sr content of the otoliths and the surrounding water may not have been reached in these cases. According to Elsdon & Gillanders (2005), it may take 20 d before an Sr signal corresponding to a new environment is fully reflected in otoliths. This time-delayed response may also have biased the data points of the reference fish used to validate the differentiation among salinity regimes. Ideally, all reference fish should have originated from known salinities under controlled conditions. However, such fish were not available, and the results of the multimodal frequency distribution as well as similar findings of Rohtla et al. (2017) suggest that the use of the last 6 data points worked quite well.
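As a concrete illustration of the reference check discussed above, the mean ± SD of the last 6 Sr:Ca measurements of a transect can serve as the signature of the water occupied at capture (a rough sketch, not the authors' code; the transect values below are hypothetical):

```python
import numpy as np

def capture_reference(sr_ca, n_tail=6):
    """Mean and sample SD of the final n_tail Sr:Ca measurements."""
    tail = np.asarray(sr_ca[-n_tail:], dtype=float)
    return tail.mean(), tail.std(ddof=1)

transect = [0.9, 1.2, 3.4, 3.9, 1.5, 1.1, 0.8, 1.0, 1.3, 0.9, 1.1, 1.0]
mu, sd = capture_reference(transect)
print(f"capture-site Sr:Ca = {mu:.2f} +/- {sd:.2f} mmol/mol")  # low-salinity tail
```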
The unexpected positive correlation between Sr:Ca and Ba:Ca ratios in resident individuals may be explained by the small range of Sr:Ca ratios close to the lower bound of measured ratios (0.13−2.09 mmol mol−1) compared to the range found in migratory individuals (0.34−11.04 mmol mol−1), which could prevent the identification of a negative correlation. Furthermore, slight variations in the Sr:Ca ratios within the low-salinity regime probably indicate small-scale migrations within the river system, changes in water temperature, food availability or age-related changes in storage rates (Campana 1999, Secor & Rooker 2000), which alter the uptake and incorporation of Sr into fish otoliths (Sadovy & Severin 1992, Bath et al. 2000). However, these effects are weaker than the effects of water salinity on Sr incorporation into otoliths (Marohn et al. 2009, 2011) and are thus expected to be of minor importance. The salinity in the Elbe estuary changes over the tidal cycle, as tidal currents move a water body between 15 and 20 km down- and upstream twice a day (Bergemann 1995). This could cause slight variations in salinity at sampling locations, and it cannot be ruled out that these variations are also reflected in otolith elemental composition to a minor extent. Ba:Ca ratios varied strongly within the measured profiles of resident individuals. Ba uptake into otoliths is mainly driven by its availability in the surrounding water (Hüssy et al. 2020) but can also be affected by other environmental factors. Bath et al. (2000) found that, for example, temperature can significantly influence the Sr:Ca ratio of marine fish but has no effect on the incorporation of Ba into the otoliths. However, other studies reported significant temperature effects on Ba incorporation into otoliths, e.g. for black bream Acanthopagrus butcheri (Elsdon & Gillanders 2002) and European eel Anguilla anguilla (Marohn et al. 2011), suggesting species-specific differences in temperature effects on the incorporation of Ba into otoliths. To determine if and to what extent temperature or other potential factors (e.g. growth; Walther et al. 2010) are responsible for the observed Ba fluctuations is notoriously complex and beyond the scope of this study.

Migratory behavior in relation to sex

The lack of differences in migration behavior between sexes in our study contrasted with the strong differences observed in many other partially migratory salmonid fish species (Jonsson & Jonsson 1993, Chapman et al. 2012a, Dodson et al. 2013). Among salmonids, females typically dominate the migratory contingent, e.g. in brown trout Salmo trutta and Atlantic salmon S. salar (Jonsson & Jonsson 1993, Klemetsen et al. 2003). A reason for female-biased migration may lie in the strong correlation of fecundity and body size; migrating to the highly productive marine environment may increase reproductive success through better growth to a greater extent for females than for males (Gross 1987, Jonsson & Jonsson 1993, Klemetsen et al. 2003). The similar observations for males and females in our study would be in line with the observed high proportion of permanent residency that pointed to sufficient feeding conditions within the Elbe.

Migratory behavior in relation to fish size

Body size may be an important trait that could have an impact on whether to migrate or not (Chapman et al. 2012a, Dodson et al. 2013).
Although information on body size before the time of first migration to a high-salinity regime was not available in the present study, the body size of older individuals indicated that migrants were larger than residents, with significant differences in AG 2 and the same trend (but without significant differences) in AG 3. This is consistent with previous observations for different species including coregonids (Mehner & Kasprzak 2011; reviews by Chapman et al. 2012a and Dodson et al. 2013). In temperate waters, the sea generally offers richer feeding grounds than the freshwater environment. This allows migrants to have a higher growth rate compared to resident individuals, which results in larger body size at the same age (Gross 1987, Jonsson & Jonsson 1993). Factors such as low population density and good feeding opportunities in the freshwater system that favor residency over migratory behavior (Jonsson & Jonsson 1993) could also have led to the less pronounced differences in body size between migrants and residents of C. maraena in the Elbe.

Implications for conservation measures

The survival of endangered species directly depends on the availability of suitable habitat (Cooke et al. 2012, Arthington et al. 2016). Understanding the habitat needs of such species is therefore critical information to support conservation efforts. A complex stock structure, which is characterized by both residency and migration behavior, emphasizes the need for a differentiated approach to species-specific needs. Resident individuals use the habitat not only for spawning but also as a feeding habitat throughout the year. The entire life cycle takes place in a relatively small geographical area, which is therefore of crucial importance for the survival of these individuals. Migratory individuals, on the other hand, also need marine habitats and rely in particular on open migration routes to switch between habitats to complete their life cycle. Consequently, the present study identifies the River Elbe system as a crucial area that is used year-round by an important proportion of the population, and is thus relevant as a feeding, spawning and wintering habitat as well as a migration route for maraena whitefish. From this, a need for year-round protection of the riverine habitat can be derived. However, the Elbe is exposed to strong human impacts such as canalization, industry and fisheries (Kammerad 2001b, Thiel 2011). Commercial shipping in particular is of great importance (Boehlich & Strotmann 2008). In this context, deepening of the navigation channel has considerably altered the river, affecting not only its tidal dynamics but also its biota, including fishes (Thiel 2011). Further investigations of the species-specific habitat use within the Elbe and the impacts of anthropogenic activities on the quality of these habitats would thus have strong potential to support effective management strategies and improve the protection of this priority fish species in the context of the Habitats Directive.

Conclusions

The partial migration within the C. maraena population in the Elbe estuary observed here represents an example of phenotypic plasticity in a fish that possibly increases fitness under variable environmental conditions (Jonsson & Jonsson 1993). The occurrence of migrations between the River Elbe and the Wadden Sea, but also a substantial proportion of permanent or at least long-term freshwater habitat use, provides new knowledge to inform conservation decisions.
Specifically, it highlights the importance of the Elbe as both a feeding and a spawning habitat, as well as the importance of maintaining migration corridors and connectivity within the system.
Inducing secondary metabolite production of Aspergillus sydowii through microbial co-culture with Bacillus subtilis

Background: The co-culture strategy, which mimics natural ecology by constructing an artificial microbial community, is a useful tool to activate biosynthetic gene clusters and thereby generate new metabolites. However, the conventional way to study co-culture is to isolate and purify compounds separated by HPLC, which is inefficient and time-consuming. Furthermore, the overall changes in the metabolite profile cannot be well characterized.

Results: A new approach, which integrates the computational programs MS-DIAL and MS-FINDER with web-based tools including GNPS and MetaboAnalyst, was developed to analyze and identify the metabolites of the co-culture of Aspergillus sydowii and Bacillus subtilis. A total of 25 newly biosynthesized metabolites were detected only in co-culture. The structures of the newly synthesized metabolites were elucidated, four of which were identified as novel compounds by the new approach. The accuracy of the new approach was confirmed by purification and NMR data analysis of 7 newly biosynthesized metabolites. The bioassay of the newly synthesized metabolites showed that four of the compounds exhibited different degrees of PTP1b inhibitory activity, and compound N2 had the strongest inhibitory activity, with an IC50 value of 7.967 μM.

Conclusions: Co-culture led to global changes of the metabolite profile and is an effective way to induce the biosynthesis of novel natural products. The new approach in this study is an effective and relatively accurate method to characterize the changes of metabolite profiles and to identify novel compounds in co-culture systems.

Background

Natural products (NPs) are an important historical source of many useful drugs and other chemical agents, of which microbial secondary metabolites represent a significant part [1]. Numerous novel secondary metabolites have been isolated from marine fungi, and 70-80% of them have good biological activities such as anti-cancer, anti-bacterial, anti-parasitic and free-radical-scavenging activities. Moreover, some compounds have been marketed as commercial drugs through clinical research [2]. However, with the progress of scientific research, researchers have found that repeated discoveries of known metabolites are increasing. The fact that this biosynthetic potential has remained elusive is mostly explained by the observation that many genes are transcriptionally silent under standard culture conditions, rendering their products inaccessible [3]. Moreover, analyses of microbial whole-genome sequences indicate that microbes contain many thousands of biosynthetic gene clusters, which encode a plethora of compounds that are not identified when cultured under standard laboratory conditions [4]. To overcome these impasses, several approaches have been developed, such as nontargeted metabolic engineering, epigenetic modification, and chemical synthesis. Among such approaches, the co-culture method draws increasing attention as a means to stimulate the production of novel natural products. Co-culture of different microorganisms can imitate the natural microbial environment, in which silent biosynthetic gene clusters are transcriptionally activated by environmental stimuli [5]. The chemical cues released by other microbes can cause various defense responses, including changes of mycelial morphology, synthesis of diverse secondary metabolites, and production of extracellular enzymes.
Indeed, these activated defensive metabolites can act as chemical cues that trigger a series of transcriptional activations [6]. Recently, there have been many successful studies on the use of microbial co-culture to induce new secondary metabolites. For instance, Zuck et al. demonstrated that co-culture of Aspergillus fumigatus and Streptomyces peucetius induced the production of four formyl xanthine analogs that were not generated in pure culture, of which two were compounds with novel structures, and compound 2 showed significant inhibitory activity against several cell lines [7]. Moreover, Wu et al. co-cultured Bacillus amyloliquefaciens and Trichoderma asperellum and found that the production of antibacterial substances was significantly higher than that in pure culture; when the inoculation ratio was 1:1, the production of specific amino acids was improved [8]. Therefore, co-culture is regarded as a useful research method for effectively inducing the production of metabolites. Aspergillus sydowii can produce various secondary metabolites which are increasingly utilized in pharmaceuticals, food and chemicals, such as endoglucanases with industrial application value, enzymes with inhibitory activity against protein tyrosine phosphatase A of M. tuberculosis, sesquiterpenoids with antimicrobial and antiviral activities, and alkaloids with activity against S. aureus and S. epidermidis [9]. However, analysis of the whole genome of A. sydowii revealed a number of genes for the biosynthesis of compounds that have not been observed when the fungus is cultured under standard conditions [10]. A previous study showed that the addition of 5-azacytidine, an epigenetic modifier, to the broth of A. sydowii induced the production of (S)-(+)-sydonol, which potentiated insulin-stimulated glucose consumption, suggesting that the metabolites of A. sydowii obtained through nontargeted metabolic engineering might be developed into antidiabetic agents [11]. Recently, enzymes from the protein tyrosine phosphatase (PTP) superfamily have been emerging as potential new drug targets for type 2 diabetes. For example, protein tyrosine phosphatase 1b (PTP1b) [12] is a negative regulator of insulin action in the insulin receptor signaling pathway; SH2-containing protein tyrosine phosphatase-1 (SHP1) is a negative regulator in signaling pathways, which regulates glucose homeostasis through the modulation of insulin signaling in liver and muscle [13]; and leukocyte common antigen (CD45) is the receptor for some ligands, which can regulate the recruitment of SHP-1 [14]. Some microbial metabolites, such as varic acid analogues from fungi, showed selective inhibitory activities against PTPs [15]. Thus, the activities of co-culture metabolites against the PTPs, and their potential use in diabetes treatment, are worth exploring. The conventional method for the study of metabolites in co-culture systems is to separate and purify the compounds corresponding to each peak newly detected in HPLC, and to analyze their structures by means of MS, UV, IR, and NMR [16]. However, the conventional method is inefficient and time-consuming, and only products with high contents can be identified. The elucidation of trace newly biosynthesized metabolites in co-culture systems is still challenging. Furthermore, the overall changes of metabolite profiles during co-culture cannot be displayed.
In recent years, metabolomics, which is mainly aided by advances in analytical technologies such as high-resolution mass spectrometry (MS), has become primarily associated with the comprehensive analysis of small-molecule compounds that can be found in biological samples. Some useful tools, for instance the computation-based MS-DIAL [17] and MS-FINDER [18] programs and the web-based Global Natural Products Social molecular networking platform (GNPS) [19], have also been developed to predict the structures of metabolites. However, in most cases only a single tool was used in structure predictions, and the accuracy of the predictions is still a concern. In addition, a web-based tool, MetaboAnalyst, which combines multivariate statistics to identify spectral features that are statistically different between two (or more) different sample populations, is useful in the statistical and functional analysis of metabolomic data [20]. It has been reported that these tools have been used in the analysis of the metabolite profiles of microorganisms regulated by epigenetics [21]. However, to the best of our knowledge, there has been no report on the application of MetaboAnalyst to the co-culture of microbes. In the present study, the fungus A. sydowii was co-cultured with the bacterium B. subtilis, and an integrated metabolomics approach, composed of MetaboAnalyst, MS-DIAL, MS-FINDER, and GNPS, was developed to analyze the MS/MS data of the co-culture. The changes in the metabolite profile were characterized, and the newly biosynthesized compounds were identified. The purification and NMR analysis of part of the newly biosynthesized compounds were performed to verify the accuracy of the new approach. The activities of the newly biosynthesized compounds against protein tyrosine phosphatases (PTPs) were also evaluated.

Microbial interaction induced changes of the metabolite profile

The co-culture of twenty microorganisms with A. sydowii on bran medium showed different degrees of induction between the cultures, among which B. subtilis could significantly induce A. sydowii to produce metabolites (Additional file 1: Fig. S1). After 12 days, the color of the hyphae of A. sydowii turned from dark green to light green in co-culture, and a red-brown exudate was generated at the junction between B. subtilis and A. sydowii (Fig. 1). Moreover, a clear deadlock pattern was observed. This phenomenon indicated that during co-culture, A. sydowii and B. subtilis generated compounds due to the stress response at the confrontation zone, inhibiting each other's growth. In order to further explore this phenomenon, we collected the confrontation zone and analyzed the metabolites. The extracts from the bran medium of co-culture and pure culture were compared by LC-MS/MS, and 206 strong signal features, whose intensity was over 10% of the highest-intensity peaks, were detected. Partial least squares discriminant analysis (PLS-DA) of these peaks revealed the intrinsic variation in the data set. In the score plot, the samples from the co-culture were clearly separated from the two pure cultures, indicating changes of the metabolite profile (Fig. 2a). The heatmap generated by hierarchical clustering analysis (HCA) of these 206 features based on the MS data showed that co-culture caused global changes in the metabolomes (Fig. 2d). The heatmap also revealed that 25 features were identified only in co-culture, indicating that about 12.1% of the candidate features were newly biosynthesized during co-culture.
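The authors performed these analyses with the MetaboAnalyst web tool; the sketch below only reimplements the same two ideas on simulated data (all sample names, counts and intensities are placeholders): PLS-DA on a one-hot class matrix to separate the three groups, and hierarchical clustering of the feature table for the heatmap ordering.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from scipy.cluster.hierarchy import linkage, leaves_list

rng = np.random.default_rng(0)
groups = ["A_sydowii"] * 3 + ["B_subtilis"] * 3 + ["co_culture"] * 3
X = rng.lognormal(mean=2.0, sigma=0.3, size=(9, 206))  # 9 samples x 206 features
X[6:, :25] *= 20.0          # 25 features strongly induced only in co-culture

# PLS-DA: regress a one-hot class matrix on the autoscaled feature table
Y = np.array([[g == c for c in sorted(set(groups))] for g in groups], float)
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
scores = PLSRegression(n_components=2).fit(Xs, Y).transform(Xs)
print("PLS-DA score plot coordinates:\n", np.round(scores, 2))

# hierarchical clustering (Ward linkage) gives the heatmap row order
order = leaves_list(linkage(Xs.T, method="ward"))
print("first 10 features in heatmap order:", order[:10])
```

With the induced block of features, the three co-culture samples separate from both pure cultures along the first component, mirroring the kind of separation seen in the score plot of Fig. 2a.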
In addition, 156 features were recorded in both pure culture and co-culture. Among them, 70 features in the co-culture system were significantly decreased, while 4 features were up-regulated, when compared with the pure culture of B. subtilis. By contrast, only 8 features in the co-culture were significantly decreased, and 37 features were up-regulated, when compared with the pure culture of A. sydowii (Fig. 3). In the loading plot of the PLS-DA, the 25 newly biosynthesized features mainly deviated from the center and clustered in the lower right zone of the plot (Fig. 2b). These features showed good linear correlation. Only the features that had a large contribution to the classification generated by co-culture were distributed on this line; the features that contributed more to the classification were closer to the lower right, while the features that contributed less were clustered on the upper left of the line, closer to the origin (Additional file 1: Fig. S2). In the meantime, the variable importance in projection (VIP) score data indicated that newly biosynthesized features (N1-N4, N7, N13, and N20), with monoisotopic masses of m/z 168.4234, 266.1459, 282.1436, 282.4537, 353.1765, 402.1640, and 480.3248, respectively, were ranked among the top features detected by VIP score (Fig. 2c). These data indicated that the newly biosynthesized features in co-culture made an important contribution to group classification.

Metabolomics study of newly biosynthesized metabolites in the co-culture

To understand the structures of the newly biosynthesized metabolites, the 25 features were identified with the integrated approach. Here, we demonstrate our results using four annotation levels (Levels 1-4). Level 1: the structures were annotated from MS-DIAL-linked MS/MS databases by the characteristic product ions and neutral losses. Level 2: the metabolite ions were converted into structural information and the structures were annotated by the structure elucidation tool (MS-FINDER). Level 3: the structures were annotated putatively by their correlation with known structures, with the assistance of the network analysis tool (GNPS). Level 4: the structures were identified by separation, purification and NMR spectrum analysis.

[Fig. 2, panels b-d: the loading plot of the data analyzed by LC-HRMS (b); the top compounds ranked based on the VIP score, where the colored boxes on the right indicate the relative concentrations of the corresponding metabolite in each group (c); hierarchical clustering analysis (HCA) of the 206 most significantly variable features among the samples corresponding to the three different groups, represented on a heatmap ranging from red for high abundance to blue for low abundance (d). Data were acquired from three independent biological replicates.]

Next, the identification of feature N2 is discussed in detail as an example. The adduct ions of feature N2 were detected at m/z 265.1459 [M−H]− and m/z 325.2437 [M+CH3COOH−H]−, suggesting that the monoisotopic mass was 266.1518. The PLS-DA analysis indicated that this feature was detected only in the co-culture and contributed greatly to the cluster. Firstly, the structure of N2 was examined with MS-DIAL (Level 1), which uses a deconvolution algorithm to obtain the retention time (RT) and m/z data sets from the MS/MS data. The compounds were then annotated by comparing the characteristic product ions and neutral losses of the features with public MS/MS databases [22].
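As a quick sanity check on the adduct arithmetic (a sketch using standard adduct masses, not part of the published workflow): in negative mode the neutral monoisotopic mass follows from m/z as M = m/z + 1.00728 for [M−H]− and M = m/z − 59.01385 for the acetate adduct [M+CH3COOH−H]−.

```python
PROTON = 1.00728    # Da, mass of a proton
ACETATE = 59.01385  # Da, CH3COO- (including the extra electron)

def neutral_mass(mz, adduct):
    """Back-calculate the neutral monoisotopic mass from a negative-mode ion."""
    if adduct == "[M-H]-":
        return mz + PROTON
    if adduct == "[M+CH3COOH-H]-":
        return mz - ACETATE
    raise ValueError(f"unknown adduct: {adduct}")

print(round(neutral_mass(265.1459, "[M-H]-"), 4))  # 266.1532
```

The [M−H]− ion reproduces the reported neutral mass of N2 to within a few mDa.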
As no structural candidate was obtained from Level 1, the structure of N2 was annotated with MS-FINDER (Level 2), which is embedded in MS-DIAL software version 3.90. MS-FINDER is a strategy for computational MS/MS fragmentation: all isomeric structures of the predicted formula are retrieved from metabolome databases, and the structure is predicted from a combined weighting score that considers bond dissociation energies, mass accuracies, fragment linkages, and nine rules of hydrogen rearrangement during bond cleavage in low-energy collision-induced fragmentation [18]. After comparison of the in silico spectra with the structures provided, the annotated metabolites are summarized in Table 2. Five major classes of metabolites were induced by co-culture, including sesquiterpenes, macrolides, esters, polyketides, and flavonoids. Such metabolites are usually not generated under normal culture conditions and are only synthesized under certain stresses; they have been reported to participate in defense and communication between microbial cells, to promote metabolism, and to have a certain bacteriostatic effect [23].

Identification of the novel metabolites in the co-culture

Four features (N6, N7, N9, and N20) still did not match anything in the public MS/MS spectral libraries. To elucidate the structures of these potentially novel metabolites generated through co-culture, the MS/MS data were analyzed with the Level 3 process, assisted by the GNPS platform and manual dereplication. The GNPS approach can capture similar structures and analogous features in the same cluster regardless of LC-MS retention time. The GNPS data indicated that three induced features, N6 (m/z 350.1610), N7 (m/z 352.1765), and N9 (m/z 368.1713), were clustered together, suggesting very close structural relationships (Fig. 5). As none of the three features could be identified in the LC-MS/MS databases, compound N7, which had the highest content in the LC-MS data, was separated and purified.

Compound N7 was obtained as a white powder with UV absorption at 213 nm, 254 nm, and 298 nm. The molecular formula C18H27O6N was indicated by the HRMS data listed in Table 1. The 13C NMR and HMQC spectra indicated a total of 18 carbon signals, attributable to one carboxyl carbon at δc 172.05 (C-10), one ketone carbon at δc 166.14 (C-7), and three methyl groups at δc 28. The absolute configuration of N7 was deduced by comparing the experimental data with ECD curves calculated in Gaussian 09. The conformers were optimized by DFT at the B3LYP/6-31G(d) level in methanol, and the energies were calculated by the TDDFT methodology at the B3LYP/6-31G(d,p) level in MeOH with the PCM model (Additional file 1: Figs. S11-S14; Tables S1-S8). The calculated CD spectrum of N7 (1′R, 8S) agreed well with the experimental CD curve (Fig. 7), indicating that the absolute configuration of N7 was 1′R, 8S; the compound was named serine sydonate (Fig. 8). After database searching, N6 and N9 (4′-hydroxyl or 5′-hydroxyl serine sydonate) were also found to be novel compounds.

As compound N20 could not be connected with the other metabolites at Level 3, it was forwarded to Level 4 for direct isolation and purification. This compound was isolated as a pale-yellow creamy solid. The molecular formula C31H44O4 was indicated by the ESI-HRMS ion at m/z 503.3134 [M+Na]+ (calculated neutral mass 480.3240), implying 10 degrees of unsaturation. The UV absorption was at 212 nm and 273 nm.
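Before turning to the NMR data, the HRMS assignment quoted above can be sanity-checked with a few lines of Python; the masses below are standard isotope values, and the results reproduce the quoted 480.3240 and the 10 degrees of unsaturation.

M = {"C": 12.0, "H": 1.00782503, "O": 15.99491462, "Na": 22.98976928}
ELECTRON = 0.00054858
C, H, O = 31, 44, 4  # C31H44O4

dbe = C - H / 2 + 1                                # rings + double bonds for CcHhOo
neutral = C * M["C"] + H * M["H"] + O * M["O"]     # ~480.3240 Da
sodiated = neutral + M["Na"] - ELECTRON            # [M+Na]+ is a cation
print(dbe, round(neutral, 4), round(sodiated, 4))  # 10.0 480.324 503.3132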
The 1H NMR (DMSO-d6, 500 MHz) and 13C NMR (DMSO-d6, 125 MHz) data are provided in Additional file 1 (compound information). The 1D NMR data were consistent with those of the known compound Macrolactin U identified by Xue et al. [28], whose relative configuration is 4S, 5S. In compound N20, however, the methyl group H3-29 (δ 1.10, d) and H2-6 (δ 2.56, m) were defined as trans on the basis of the NOESY correlation between H3-29 and H-5 (δ 4.25, m) (Additional file 1: Figs. S16-S23). The relative configurations of C-4 and C-5 were therefore assigned as S and R, respectively (Fig. 9), in contrast to the S and S of Macrolactin U. Compound N20 was thus identified as an isomer of Macrolactin U and named Macrolactin U′, which is also a novel compound. The absolute configuration of N20 has not yet been determined: no obvious differences were identified among the ECD curves, and a crystal could not be obtained owing to the limited amount of the compound.

In total, 25 features induced only in the co-culture were identified by combining the computational approach (MS-DIAL) and the web-based tools (GNPS and MetaboAnalyst) with chemical isolation and purification (Table 2, Additional file 1: Fig. S24). Four of the features were novel metabolites, including two compounds confirmed by NMR. Five known compounds were also purified to verify the validity of the approach.

Biological activity assay

The isolated compounds N1-N4, N7, N13 and N20 were evaluated for anti-nematode and anti-diabetic activity. Among them, compound N3 showed a certain degree of anti-nematode activity, with an IC50 of 50 μM. Furthermore, compounds N2-N4, N7 and N13 exhibited potent activity against SHP1 and PTP1b, both of which are targets for the development of anti-diabetic drugs (Table 3). In addition, compounds N7 and N13 displayed inhibitory activity against CD45, with IC50 values of 16.0 μM and 17.9 μM, respectively.

Discussion

Microbial metabolites have always been considered a very important source of new drugs because of their various biological activities, such as anti-bacterial, anti-oxidant, and anti-tumor effects. When two microorganisms are co-cultured, new metabolites can be biosynthesized by one or both of them as a result of interspecific crosstalk or induction by biochemical signaling molecules [29]. For example, Akone et al. [30] co-cultured Chaetomium sp. with B. subtilis and obtained 5 new compounds and 7 known compounds. However, characterizing the overall changes in the metabolite profile induced by co-culture and identifying the newly biosynthesized metabolites remain complicated and challenging tasks. The fragmentation pattern in an MS/MS spectrum is a specific feature of a given compound: the structures and chemical properties of the molecules determine the observable fragmentation patterns, so similar fragmentation patterns of related compounds can be used as indications of chemical relatedness [31,32]. In the field of structure prediction of natural products, several useful tools have drawn great attention. For instance, the computational MS-DIAL program can be used to obtain deconvoluted spectra from high-resolution LC-MS data, and the computation-based MS-FINDER program can be used for structure elucidation of unknown HR-MS spectra through fragment comparison and MS database searching [18,33]. GNPS provides a visualization approach to detect sets of spectra from related molecules (molecular networks), even when the spectra themselves do not match any known compounds.
(Fig. 9 caption: Key COSY and NOESY correlations of compound N20.)
(Table 2: List of features induced only in the co-culture of A. sydowii with B. subtilis, analyzed by LC-HRMS; columns: No., Observed m/z; RT (min), Calculated m/z, Δ mass (ppm), Molecular formula, Name. Italicized values refer to compounds obtained through separation and purification.)

Using these tools, some metabolites of co-cultures have been predicted. For example, Ernest et al. [34] analyzed 9 co-cultures of marine-adapted fungi and phytopathogens with GNPS and annotated 18 molecular clusters, 9 of which were produced exclusively in co-culture. Several clusters contained compounds that could not be annotated to any known compound, suggesting that they are putatively new metabolites. However, because these studies relied mainly on GNPS alone, and because of the limited volume of the GNPS MS/MS library, few structures could be predicted, and most structures of newly biosynthesized compounds have not yet been elucidated. Recently, some researchers have tried to integrate multiple tools to assist structure elucidation. Lai et al. [35] showcased a combined workflow, including a GC-MS metabolome database, MS-DIAL and MS-FINDER, to analyze volatile organic compounds; three biomarkers and two propofol derivatives were annotated successfully in over 110,000 biological samples. To the best of our knowledge, however, there is still a lack of an integrated, effective and accurate strategy to reveal the changes in the metabolite profiles and characteristics of microorganisms following treatments that activate silenced genes, including co-culture.

In this study, MetaboAnalyst, MS-DIAL, MS-FINDER, and GNPS were integrated with the publicly available spectral libraries to compare the MS/MS data, including common neutral losses and fragmentation similarity, while retrieving the same molecules, analogs, or metabolite families, thereby facilitating structural analysis. Analysis of the co-culture of A. sydowii and B. subtilis by this new approach revealed 206 features in the confrontation zone, of which 25 features (12.1% of those detected) were newly induced by co-culture. Notably, 4 features (N7, N20, N9, and N6) were identified as novel compounds. All 25 newly biosynthesized metabolites were identified by the integrated approach, and its accuracy was partially verified by the isolation, purification and spectral analysis of five newly biosynthesized metabolites of high content. These results suggest that the new approach provides an effective and time-efficient way to characterize the overall changes in the metabolite profile and to elucidate the structures of the metabolites simultaneously. Within the integrated approach, N6 and N9, low-content derivatives of N7, were detected by the GNPS molecular network on the basis of their similar fragmentation patterns, and their structures were further elucidated with the assistance of the MS-DIAL and MS-FINDER programs. This suggests that the new approach is effective for discovering trace derivatives of metabolites and helps in understanding global changes in the metabolite profile of a co-culture system. Interestingly, the newly biosynthesized features were linearly correlated in the loading plot of the PLS-DA analysis in MetaboAnalyst (Additional file 1: Fig. S2).
In a previous co-culture study, a similar phenomenon appeared in a PCA analysis performed with SIMCA-P software when two fungi, Trametes versicolor and Ganoderma applanatum, were co-cultured (Fig. 1 in Xu et al. [36]), although the authors did not describe this linear correlation. These data suggest that, using this linear-correlation rule in PLS-DA or PCA analysis, newly biosynthesized metabolites, especially those of low content, might be picked out easily, although the mechanism behind the rule still needs to be clarified.

Analysis of the structural features of some of the newly biosynthesized metabolites also revealed their producing microorganisms. Sydonic acid (N2) has been reported as a typical metabolite of Aspergillus sp. [25], suggesting that this compound and its structurally similar compounds N3, N4, N6, N7, and N9 were produced by A. sydowii under the inducing stress of B. subtilis. Similarly, N21 has been reported to be produced by Bacillus species [28], suggesting that this compound and its analogues N13 and N20 were produced by B. subtilis. The structures of the 25 newly biosynthesized metabolites in the co-culture can be categorized into five classes: macrolides, sesquiterpenes, esters, polyketides, and flavonoids. N13 and N20 belong to the macrolides. Macrolide antibiotics have multiple conjugated double bonds, hydroxylated side chains, and macrolactone skeletons. This class of antibiotics has no effect on bacteria but inhibits fungi: macrolides can interact with sterols in the fungal cell membrane, causing leakage of small molecules and ions from the cell through transmembrane pores and eventually leading to fungal cell death [37]. For instance, Macrolactin A (N13) was reported to display meaningful antifungal activity, with MIC values of 0.04-0.3 mM [38]. Compounds N2, N3 and N4 are classified as sesquiterpenoids, which are widely distributed in nature and show anti-bacterial, anti-inflammatory and immunoregulatory activities [39]. For example, (7S)-(−)-10-hydroxysydonic acid (N3) was reported to display inhibitory activity against S. aureus, with IC50 values ranging from 31.5 to 41.9 μM [40], and (R)-(−)-hydroxysydonic acid (N4) showed broad-spectrum activity against S. aureus and B. cereus, with MICs below 25 μM [26]. Together with the analysis of the producing microorganisms, these results indicate that, to exert their antagonistic effects, B. subtilis and A. sydowii induced the biosynthesis of macrolides and sesquiterpenoids, respectively, to inhibit the growth of the opponent. In addition, the induced production of sesquiterpenoids by the fungus might play important roles in symbiosis, such as enhancing immune regulation, removing free radicals, enhancing the vitality of the bacteria, and acquiring nutrients more effectively. As a response to these antagonistic and symbiotic effects, silent genes were activated and new metabolites were biosynthesized effectively during the co-culture of A. sydowii and B. subtilis. The biological assays indicated that the purified newly biosynthesized metabolites showed specific inhibitory activities against PTP1b, SHP1 and CD45.
The PTP1b assay data for compounds N2-N7 in this study indicated that the side chains of these compounds influence their activities: adding hydroxyl groups to the side chain lowered the PTP1b inhibitory activity significantly, whereas it had no obvious effect on the inhibitory activity against CD45. Further research is needed to establish the structure-activity relationships among these compounds, which would help in designing new agents for the treatment of diabetes or for immunomodulation.

Conclusions

Co-culture of A. sydowii and B. subtilis increased the diversity of metabolites. The new integrated approach developed in this study, which combines MetaboAnalyst, MS-DIAL, MS-FINDER, and GNPS, revealed the overall changes in the microbial metabolite profile of the co-culture and provided structural information for 25 compounds, four of which are novel. Five of the compounds were also purified and their NMR data analyzed to verify the accuracy of the new approach. These data suggest that the approach is effective and reliable for the rapid identification of metabolites. Among the 7 isolated compounds assayed, N2 showed relatively strong inhibitory activity against PTP1b, indicating that the co-culture strategy can induce the production of bioactive secondary metabolites and provide a valuable platform for the discovery of further novel secondary metabolites. The co-culture strategy will also contribute to revealing the metabolic mechanisms that activate silent genes.

General experimental procedures

HPLC analysis was performed with a Waters HPLC system equipped with a 2998 detector and a 1525 pump. Routine detection wavelengths were 235, 254, 280, and 340 nm. Twenty (20) μL of each sample was injected onto a Shimadzu TC-C18 column (10 × 250 mm, 5 μm), and the following gradient was used (mobile phase A: 0.2% CH3 ...). Compounds were prepared by silica gel column chromatography and an LC3000 semi-preparative HPLC system (Beijing Chuang Xin Tong Heng Science and Technology Co., Ltd). Analytical, semi-preparative and preparative HPLC was performed using an ODS column from Shimadzu Co. (TC-C18, 10 × 250 mm, 5 μm), a YMC semi-preparative column (YMC-Pack Pro C18 RS, 10 × 250 mm, 5 μm), and a YMC preparative column (YMC-Pack ODS-A, 20 × 250 mm, 10 μm). NMR data were recorded on a Bruker 500 MHz spectrometer.

Microorganisms and co-cultivation experiment

Aspergillus sydowii was isolated from a piece of deep-sea mud from Dalian, China. To further explore the secondary metabolites of this strain, A. sydowii was co-cultured with B. subtilis. The fungal and bacterial strains were activated on potato dextrose agar (PDA) medium (200 g potato/L, 20 g dextrose/L and 15 g agar/L) for 3 days, and each strain was then suspended in 2 mL of sterile water. To establish the individual pure cultures, 80 μL of suspension was inoculated into a 90 mm Petri dish containing 20 mL of bran medium (100 g bran/L, 20 g dextrose/L, 15 g agar/L). For the co-culture, 80 μL of each suspension of A. sydowii and B. subtilis was inoculated approximately 10 mm apart on the PDA medium. The plates were incubated at 28 °C for 12 days.

Measurement of the metabolome

The extracts were dissolved in 150 μL methanol and centrifuged at 16,000×g for 10 min.
The supernatants were transferred into HPLC autosampler vials and analyzed on an LTQ Orbitrap XL mass spectrometer (Thermo Fisher Scientific, Hemel Hempstead, UK) at a flow rate of 0.6 mL/min. The ESI conditions were as follows: spray voltage, 4200 V; sheath gas pressure, 35 arb; auxiliary gas pressure, 10 arb; heater temperature, 320 °C; capillary temperature, 300 °C. The mass scanning range was m/z 50-1200 in centroid mode with a scan rate of 1.5 spectra/s. Mass detection was performed with an electrospray source operating in positive and negative ion mode at 15,000 resolving power, and the mass measurement was externally calibrated before the experiment. Each full MS scan was followed by data-dependent MS/MS on the three most intense peaks using stepped collision-induced dissociation (35% normalized collision energy, isolation width 2 Da, activation Q 0.250). Twenty (20) μL of each sample was separated on a Shimadzu TC-C18 column (10 × 250 mm, 5 μm). Mobile phase A was water with 0.1% acetic acid and mobile phase B was acetonitrile with 0.1% acetic acid. The reversed-phase elution gradient was: 0-10 min, 20% B; 10-30 min, 20-80% B; 30-35 min, 80-85% B; 35-40 min, 100% B; 40-45 min, 25% B. All samples had three independent biological replicates, and the solvent (MeOH) and the pure cultures were injected under the same conditions as controls.

Metabolite profile and structure analysis

To fully exploit the differences in metabolite profile between the co-culture and the pure cultures, MS-DIAL (version 3.90), a computational approach that helps to rapidly characterize metabolite structures [22], was integrated with MetaboAnalyst [41], a web-based tool for comprehensive metabolomic data analysis and interpretation. The approach comprised the following steps. (1) The MS/MS data were converted to abf format by the Analysis Base File Converter and then subjected to the MS-DIAL program for peak list alignment. (2) Peak alignment parameters: the MS tolerance was set to 0.01 Da, the minimum peak height to 1 × 10^7, and the maximum charge to 2. (3) Multivariate analysis of the global metabolite profile with MetaboAnalyst: the aligned data were uploaded to MetaboAnalyst, normalized by the sum and auto-scaled, and then analyzed by PLS-DA to reveal the global profile changes; a heatmap showing the clustering of features and visualizing the differences between groups was also obtained. (4) Structural identification of the metabolites assisted by MS-DIAL and GNPS. This step comprised four levels. Level 1: structures were annotated against the MS/MS databases linked to MS-DIAL by their characteristic product ions and neutral losses; the public MS/MS databases mainly include ReSpect, BMDMS-NP, MetaboBASE, Fiehn/Vaniya and a natural product library, in positive and/or negative mode. Level 2: structures were annotated with the MS-FINDER program linked to MS-DIAL. The metabolite ions were converted into structural information with MS-FINDER: the number of carbon atoms and the formula were determined, and the structural formulas of all substructures were defined. By comparison with the public spectral databases, including NIST 14, MassBank, Metlin, ReSpect, and MetaboBase, compounds with a monoisotopic mass error within ±5 ppm and a structure score higher than 5 were screened by mass spectral peak matching (see the sketch below).
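The ±5 ppm screen in Level 2 reduces to one line of arithmetic. In the sketch below, the first formula is N7's (C18H27NO6, neutral mass 353.1838); the observed value and the alternative formula are invented for illustration.

def ppm_error(observed, calculated):
    return 1e6 * (observed - calculated) / calculated

candidates = {"C18H27NO6": 353.1838, "C17H23NO7": 353.1475}  # calc. neutral masses, Da
observed = 353.1845  # hypothetical measured neutral mass
hits = {f: round(ppm_error(observed, m), 1) for f, m in candidates.items()
        if abs(ppm_error(observed, m)) <= 5.0}
print(hits)  # only C18H27NO6 survives (~2 ppm); the other is off by ~105 ppm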
The structures were then searched against the Reaxys and SciFinder databases to confirm whether they derived from natural products, and finally the ontology of all substructure forms was defined. Level 3: structure annotation assisted by GNPS. The LC-MS/MS data were uploaded to GNPS to create a network between the metabolites. In this way, features whose structure scores were below 5, or whose fingerprints matched no compound in the MS/MS databases (potentially novel compounds), could still be correlated with other structures. If any structure in a molecular network could be identified in the LC-MS/MS databases, the structures of the other features in the network were deduced by comparing the MS/MS spectra of the unknown and the identified structures. Otherwise, at least one compound in the network was separated and purified and its NMR spectra analyzed to elucidate its structure, from which the other structures in the network were deduced. For the GNPS analysis, the LC-MS/MS data were first converted to mzXML format by MSConvert, processed with MZmine 2 [42], and then uploaded to GNPS. The parent mass tolerance was set to 2.0 Da and the fragment ion tolerance to 0.5 Da; the maximum connected-components value was set to 19 and the minimum cluster size to 2. All matches between network spectra and library spectra were required to have a score above 0.7 and at least six matched peaks. The molecular networks were visualized with Cytoscape (version 3.6.1) [43]. Level 4: structure identification by separation, purification and NMR spectral analysis. Features whose structures could not be determined at Levels 1-3 were separated and purified by column chromatography and analyzed by 1D and 2D NMR. To verify the accuracy of the identification approach, some structures with higher VIP scores in the PLS-DA analysis were also separated, purified and analyzed by NMR.

Extraction and isolation of the metabolites

After 12 days of culture, the confrontation zones of the co-culture were collected and soaked in ethyl acetate to extract the compounds of interest. The extract was evaporated under reduced pressure to give 30 g of crude extract, which was subjected to a silica gel column with gradient elution using n-hexane/dichloromethane (90:10 → 0:100 over 30 min, then 0:100 for 10 min) at a flow rate of 12 mL/min. Three fractions, F1-F3, were obtained. Fraction F1 was further purified on DAISO ODS (20% to 100% acetonitrile over 35 min) at a flow rate of 20 mL/min, followed by preparative HPLC with acetonitrile-H2O (30% isocratic), to yield N1 (20 mg), N3 (19 mg), N4 (13 mg) and N7 (50 mg). Fraction F2 was processed in the same manner as F1 with acetonitrile-H2O (60% isocratic) to yield N2 (60 mg) and N13 (21 mg). Fraction F3 was further purified on a YMC preparative column and a YMC semi-preparative column at 3 mL/min with acetonitrile-H2O (75% isocratic) to yield N20 (8 mg).

Computational details

The theoretical calculations for compound N7 were performed with Gaussian 09. First, the conformers were optimized at the B3LYP/6-31G(d) level in MeOH, and the theoretical ECD was calculated by time-dependent density functional theory (TDDFT) at the B3LYP/6-31G(d,p) level in MeOH. Second, the ECD spectra were simulated using a Gaussian function with band width σ = 0.30 eV.
Finally, the ECD spectrum of compound N7 was obtained by weighting the spectrum of each geometric conformer by its Boltzmann population.

Protein tyrosine phosphatase 1b inhibitory assay

The PTP1b inhibitory activity of the tested compounds was measured at 37 °C using p-nitrophenyl phosphate (pNPP) as the substrate. The reaction was performed in a 96-well plate (final volume 150 μL) and incubated for 30 min at 37 °C in assay buffer (50 mM citrate, pH 6.0, 0.1 M NaCl, 1 mM EDTA, and 1 mM dithiothreitol). The reaction was then terminated by the addition of 10 M NaOH, and the amount of p-nitrophenol released was determined by measuring the absorbance at 405 nm.
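To illustrate how IC50 values such as those quoted for N7 and N13 are typically extracted from the 405 nm readings, a four-parameter logistic fit can be used; the data points below are invented for illustration, not measurements from this study.

import numpy as np
from scipy.optimize import curve_fit

def logistic4(c, bottom, top, ic50, hill):
    # % residual activity as a function of inhibitor concentration c
    return bottom + (top - bottom) / (1 + (c / ic50) ** hill)

conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0])       # inhibitor, uM
activity = np.array([95.0, 85.0, 60.0, 30.0, 10.0])  # % of uninhibited control
popt, _ = curve_fit(logistic4, conc, activity, p0=[0.0, 100.0, 15.0, 1.0])
print(f"fitted IC50 ~ {popt[2]:.1f} uM")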
2021-02-13T14:26:24.147Z
2020-12-14T00:00:00.000
{ "year": 2021, "sha1": "3b53a7eb19a673b0e59baa674816c359d963387a", "oa_license": "CCBY", "oa_url": "https://microbialcellfactories.biomedcentral.com/track/pdf/10.1186/s12934-021-01527-0", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2833628b16b1a07f4633b197b7c65662bf4fed9a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
254387776
pes2o/s2orc
v3-fos-license
Microstructure and phase composition of thin protective layers of titanium aluminides prepared by self-propagating high-temperature synthesis (SHS) for Ti-6Al-4V alloy

Introduction

Titanium aluminides are promising materials in aerospace owing to their favorable density-to-strength ratio, combined with elevated oxidation and creep resistance at high temperatures (up to 800 °C [1]), compared with commonly used aerospace materials such as aluminium alloys [2], titanium and Ti-6Al-4V [3]. Titanium and aluminium can form a variety of intermetallic phases, including γ-TiAl, α2-Ti3Al, TiAl2 and TiAl3, of which γ-TiAl is industrially the most important for its high-temperature properties and relatively low brittleness [4]. It is very difficult to achieve a homogeneous structure when producing TiAl intermetallics by conventional methods such as vacuum arc remelting, and production is expensive because of the high melting temperature of these aluminides. A further significant disadvantage is the high content of interstitial impurities, such as oxygen or nitrogen, as well as the porosity of the material, which leads to deterioration of the mechanical properties [5]. New production routes for these aluminides are therefore being investigated. It has been shown that titanium aluminides can be prepared by SHS [6,7], by additive manufacturing using selective laser melting (SLM) or electron beam melting (EBM) [8], by hot isostatic pressing (HIP) [9], by spark plasma sintering (SPS) [10] and by other methods. SHS is often used to prepare the desired compound, which is then crushed, the resulting powder being suitable for a further compaction method [11,12]. This work presents the use of SHS to create thin TiAl-based coatings on Ti-6Al-4V alloy in a single step.

Experimental

Aluminium powder was deposited on Ti-6Al-4V alloy, uniformly dispersed with ethanol, and fixed with a 10 nm layer of sputtered gold. The SHS process took place in an evacuated quartz ampoule placed in a furnace heated to 800 °C for 15 min, as shown schematically in Fig. 1. The bulk sample obtained was observed by SEM (TESCAN VEGA 3 LMU) with an EDS detector, a relief image of the sample surface was taken with a Keyence VHX-J250 microscope, the phase composition was measured by μ-XRD (D8 Discover with a VANTEC 2D detector), and the distribution of the individual phases was determined by EDS analysis. Subsequently, the sample was annealed for 3 hours under the same conditions and the same analyses were repeated.

As shown in Fig. 5 A, the surface of the intermetallic coating oxidized after annealing, forming mainly oxides of iron and aluminium (determined by EDS), even though the annealing took place in vacuum; oxygen was probably deposited on the matrix before annealing. The presence of iron probably comes from contamination during cutting of the material for the cross-section. The most significant oxidation occurred at the edges of the coating. Although Au particles were still observed at the edges (Fig. 5 A), those on the rest of the surface had dissolved into the coating, see Fig. 5 B.

(Fig. 5: Microstructure of the surface of the sample after annealing; A, magnification 500×; B, magnification 2000×.)

To determine the chemical and phase composition in the bulk, a cross-section was prepared and EDS and μ-XRD analyses were performed. As can be seen in Fig. 6 A/B, the coating porosity is independent of annealing. The annealed material (Fig. 6 B) shows an extended bonding layer between the coating and the Ti-6Al-4V matrix.
The bonding layer of the unannealed material is 2.8 ± 0.4 μm thick and grows to 7.6 ± 0.6 μm, extending towards the coating, after annealing. During annealing a further chemical reaction took place, and according to the EDS point analysis (Tab. 1), γ-TiAl phases formed in the region of this bonding layer (spectrum 4 in Fig. 6). However, the results of the EDS analysis are strongly influenced by the surrounding elements, since the interaction volume of the electron beam in the material is comparable to the size of the observed phases. μ-XRD analysis was therefore performed to confirm the TiAl3 and γ-TiAl phases inferred from the atomic ratios of the elements. As shown in Fig. 7, the diffractograms both before and after annealing contain only one maximum attributable to γ-TiAl or cubic TiAl3 (at 45.7°), so the phases cannot be distinguished unambiguously. The X-ray beam also interacted with the bulk of the acrylic resin, because it is impossible to target a specific spot on the material with sufficient accuracy (50 μm) and the acquired signal comes from an even larger area [13]; multiple maxima belonging to the resin therefore appear in the diffractogram. Although the phases were not confirmed by μ-XRD alone, the combination of μ-XRD and EDS suggests that the coating is formed by the intermetallic phases TiAl3 and γ-TiAl.

(Fig. 6: Microstructure in cross-section of the sample before (A) and after (B) annealing; magnification 5000×.)
(Tab. 1: Chemical composition in wt. % acquired by EDS point analysis of spectra 1-4 in Fig. 6.)

Conclusion

This paper has shown that a titanium aluminide coating can be prepared in one step using SHS. The coating most likely consists of TiAl3 and γ-TiAl. More precise analyses are required to specify the phase composition further, for example observation of the coating interface by TEM on a precisely localized lamella prepared by FIB-SEM. A lower coating porosity could be achieved by applying pressure to the sample before the SHS, and a higher proportion of the intended γ-TiAl could be achieved by increasing the annealing time.
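For reference, the conversion from EDS weight percent to the atomic ratios used in such phase assignments is a short computation. The wt. % inputs below are illustrative round numbers, not the values of Tab. 1.

MOLAR = {"Ti": 47.867, "Al": 26.982}  # g/mol

def to_atomic_percent(wt):
    moles = {el: w / MOLAR[el] for el, w in wt.items()}
    total = sum(moles.values())
    return {el: round(100 * n / total, 1) for el, n in moles.items()}

print(to_atomic_percent({"Ti": 37.2, "Al": 62.8}))  # ~25/75 at. % -> TiAl3
print(to_atomic_percent({"Ti": 63.9, "Al": 36.1}))  # ~50/50 at. % -> gamma-TiAl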
2022-12-08T16:07:54.230Z
2022-12-06T00:00:00.000
{ "year": 2022, "sha1": "a081bc31e58d2638ae8c0b2f70a524ced081e63a", "oa_license": "CCBYNC", "oa_url": "http://journalmt.com/doi/10.21062/mft.2022.069.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6d427c55468bc9f12d59bac4039943af6cb57f59", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [] }
269038630
pes2o/s2orc
v3-fos-license
Mission Shakti: A Silent Revolution in Odisha

Women empowerment through Self-Help Groups has emerged as a potent force for fostering socio-economic development worldwide. SHGs serve as a platform for women to come together, pool resources, and engage in collective decision-making, thereby enhancing their socio-economic status and agency. Through SHGs, women gain access to financial services, acquire entrepreneurial skills, and develop a supportive network, enabling them to break the cycle of poverty and marginalization.

INTRODUCTION

Women are an essential component of any thriving economy and play a significant and vital role in society. Their influence on how families and society are shaped cannot be overlooked. The degree of honour, independence, protection, and opportunity that women are accorded, along with the status they enjoy, are indicators of the level of civilization that any particular culture has reached. Since women make up half of a country's population and influence the growth and development of the other half, the status and development of women determine a nation's ability to prosper and expand. Women have made numerous financial contributions to the developing economy as a whole; consequently, no one can consider the economic advancement of a nation without the involvement of women. For this to become a reality, the system must guarantee women equal rights and greater autonomy. Empowering women involves more than the country's economic growth; it also involves social fairness, gender equity, and general social peace. By doing away with gender inequality in society, the Self-Help Group (SHG) model of financial inclusion in India empowers women, and the model shows how the abilities of underprivileged women can be brought to the fore.

II. WOMEN EMPOWERMENT

Economic development is a general aspiration in today's world, and all countries are pursuing it. It has been observed, however, that the pace of economic progress tends to be high where people from all strata of society participate. In that case, the inhabitants of a society share in the development that the society or country achieves. This is the real progress of a country, in which an optimal alignment of macroeconomic and microeconomic outcomes becomes possible. But the question arises of how people will participate in this journey and showcase their talents and skills; unless they get the requisite scope and recognition, it is difficult. Therefore, to provide that recognition and scope, we need to empower the underprivileged, especially women.

Empowering women means ensuring extended command over their lives, bodies, and social status. Because of entrenched power dynamics, class hierarchy, occupational dangers, and certain socio-cultural norms and traditions, inequality has persisted. Gender inequality was a component of wider inequality, which prevented women from advancing in education, immunisation, and other social necessities. Their enormous potential was disregarded, and they were denied their fair share in the development process. The main explanations for women's low standing are their developmental trajectory and the widely held belief that they are undervalued. It is imperative to address the literacy, nutrition, health, and empowerment of women if we want more of them to be agents of change.
The concept of women empowerment is of recent origin. The idea was first introduced in 1985 at the International Women's Conference in Nairobi, which concluded that empowerment is the transfer of power and control over resources in favour of women through positive intervention. Empowerment thus means making oneself powerful and capable of obtaining and securing ever more power and facilities through different means; it refers to the state of being empowered or the act of empowering. The International Encyclopaedia (1999) defines power as the ability and means to steer one's life towards desired positions or goals in politics, the economy, or society. Increased knowledge and resource availability, greater decision-making autonomy, improved life-planning capacity, more control over life-influencing events, and freedom from restrictive traditions, beliefs, and practices are all benefits of empowerment. Women empowerment, then, is the process of giving women the power to realise their rights and duties, both to themselves and to others. It bestows the ability and strength to resist the prejudice imposed by a male-dominated society, gives women more autonomy in life, and enables them to organise themselves and become more independent. When women are empowered, gender-based discrimination is eliminated in all institutions and societal structures, and women's involvement in public policy and decision-making processes is guaranteed.

III. WOMEN EMPOWERMENT IN INDIA

The Indian government proclaimed 2001 as "Women Empowerment Year" to highlight a vision in which women and men are equal partners. The need for such an effort was evident: even after more than 50 years of independence, half of India's population still lived in poverty, despite the Constitution's explicit guarantees of equality and equal opportunity for all people regardless of caste, creed, gender, or religion. Laws and constitutional provisions supporting women produced little result, since women's emancipation was impeded by an unfriendly patriarchal society. Having learned from rural areas that women had begun to form small groups, each member contributing small amounts in order to develop saving habits and accumulate money for starting micro-enterprises, individually and collectively, with loans from the common fund, the Government of India sought to make it easier for women to form such Self-Help Groups (SHGs). The process took off like a mission, and success stories soon began to come in from all over. The State governments carefully considered the Central government's advice and created corresponding plans and programmes in their own States. As a result, SHGs began to operate and developed into rather useful instruments for reducing poverty and promoting economic independence. To aid their operation, the SHGs were linked to NGOs, commercial banks, cooperative banks, regional rural banks, and NABARD. The women involved in this process began to think and act differently and had a distinct worldview compared with women who were not involved. These organisations began operating in Odisha through the state's premier initiative, Mission Shakti, which was launched on 8 March 2001 by the honourable Chief Minister of Odisha with the goal of reducing poverty and empowering women. SHGs were then connected to other government development and welfare initiatives to ensure their successful execution.
 The Concept of Self-Help Groups

The Self-Help Group (SHG) approach is a novel approach to rural development that aims to improve the well-being of the poor, give them access to resources and credit, boost their self-esteem and confidence, and establish their credibility in all spheres of life. A Self-Help Group is a voluntary, autonomous organisation of women from comparable socio-economic backgrounds who come together to encourage saving among themselves. The SHG's contribution to reducing poverty takes the shape of economic programmes that create jobs and offer microfinance services to the underprivileged so that they can learn new skills and diversify their livelihoods. The Swarnajayanti Gram Swarozgar Yojana, launched in 1999 with the goal of organising the poor into Self-Help Groups, embraced this new approach.

An SHG is therefore a small, economically homogeneous affinity group of the rural poor (10 to 20 members) who come together voluntarily to save modest amounts of money on a monthly basis. These savings are put into a common fund to cover members' emergency needs and to give collateral-free loans decided by the group. SHGs are now acknowledged as helpful resources for the underprivileged and as an alternative method of obtaining quick credit through thrift. Through this programme, women are able to maintain stable financial lives and cultivate a habit of saving. Self-Help Groups improve women's standing as participants, decision-makers, and beneficiaries in the democratic, economic, social, and cultural domains of life. Group cohesion, a spirit of thrift, demand-based lending, collateral-free loans, women-friendly loans, peer pressure to repay, skill development, capacity building, and empowerment are the fundamental tenets of SHGs.

 Evolution of Self-Help Groups

Self-Help Groups originated in 1975 with Mohammed Yunus in Bangladesh. Professor Yunus started an innovative plan by establishing the Grameen Bank to provide financial assistance to the poor and downtrodden of Bangladesh, and thus the system of micro-credit and women's Self-Help Groups came into the limelight of society. The strategy had a tremendous effect on poverty alleviation in Bangladesh by empowering poor women. The SHG model was so popular that many countries of the world, including Bolivia, Indonesia, and Mexico, were attracted to it and followed the path of Mohammed Yunus to fight poverty in their own countries. SHGs were introduced in India by NABARD (National Bank for Agriculture and Rural Development) in 1986-87, but the real effort came after 1991-92 through the linkage of SHGs with banks.
 Indian Scenario

SHGs first appeared in India with the founding of the Self-Employed Women's Association (SEWA) in 1972, though women had made some modest attempts at self-organization even earlier. For instance, the Textile Labour Association of Ahmedabad established its women's wing in 1954 to teach sewing, knitting, and other skills to women from mill-worker families. Ela Bhatt, the founder of SEWA, organised low-income, self-employed women labourers in the unorganised sector, such as potters, hawkers, and weavers, in order to increase their earnings. NABARD introduced SHGs to India in 1986-1987 and in 1992 established the SHG Bank Linkage project, the largest microfinance initiative in the world. Through the efforts of NABARD and the Reserve Bank of India, SHGs were able to open savings bank accounts starting in 1993. The Government of India launched the Swarna Jayanti Gram Swarozgar Yojana in 1999 to foster self-employment in rural areas by creating these groups and enhancing their skills; in 2011 this was developed into the National Rural Livelihood Mission.

 Main Objectives of Women Self-Help Groups

Women's Self-Help Groups are increasingly used as a tool for various developmental interventions. Through the establishment of SHGs, India's rural women are able to obtain loans and extension help for a range of production-oriented and income-generating activities. After a group is formed, the members assemble in a group meeting at least once a month to discuss the group's achievements and decide their future course of action. Sometimes fines are collected from absentee members, and that money is deposited in the bank passbook. The proceedings of every group meeting are recorded by the secretary or president, and the group keeps basic records such as individual passbooks, bank passbooks, cash books, attendance registers, loan ledgers, and general ledgers. Every lending decision is made by the participants in the group meetings through participatory decision-making. Some of the important goals and objectives of these groups are:

 Raising awareness among the target area's women of the importance of SHGs and their role in the journey of empowerment.
 To foster a sense of community among women.
 To increase women's self-assurance and competence.
 To foster women's ability to make decisions collectively.
 To encourage women to save more and to make it easier for them to build up their own capital resource base.
 To encourage women to assume societal obligations, especially those pertaining to the development of women.
 To achieve societal parity between genders.
 To raise women's economic standards by encouraging them to work for themselves.
 To make it possible to use government assistance programmes and bank financing.
 To assist the members in escaping the grasp of moneylenders.
 To raise funds in order to support economic activity.
 To support and promote women's social, technical, and economic advancement.
 To strengthen and provide for them monetarily.

This approach has proved successful in India by providing better economic conditions through various income-generating activities, as well as by creating awareness about health, hygiene, sanitation, cleanliness, environmental protection and education. It has been able to create a mass political and economic consciousness among women, and their status has improved.

 Role of SHGs in Socio-Economic and Political Empowerment of Women

Women had the same status and rights as men during the Vedic era, but with the arrival of Muslim rulers in mediaeval India, women were relegated to a lower position. During this time a number of evil customs were followed, including child marriage, sati, the Devadasi ritual, and female infanticide. Despite a few notable female leaders, the status of women in India remained largely unchanged. In the modern age, under British rule, women's status saw a minor improvement: a system of education for women was introduced, which changed some of the fundamental status quo for women.
The Indian government proclaimed 2001 to be the "Year of Women's Empowerment", emphasising the equality of women and men. SHGs developed into an effective tool for empowering women in the rural economy and for reducing poverty, and they raised awareness of women's well-being, entrepreneurship, and self-employment. According to one study, many SHGs in rural areas have improved the socio-economic circumstances of rural families, with NGOs in India acting as intermediaries between the government and rural development. Banks can provide small loans to the underprivileged without worrying about non-performing assets when they work with NGOs and Self-Help Groups. Another study shows that SHGs have enhanced financial services for the underprivileged and raised their social standing through a network of commercial banks, cooperatives, regional rural banks, NABARD, and non-governmental organisations; SHGs are thus crucial to boosting gainful employment. According to a study of SHGs in India, 47.9% of sample households rose above the poverty level from their pre-SHG circumstances, and almost 59% of them reported a gain in assets.

As a result, social empowerment was demonstrated by the members' increased self-assurance, better treatment within the family, improved communication abilities, and other behavioural traits. Women's empowerment has three dimensions: economic, social, and political. In daily life women make fewer decisions than men do, but SHGs and their micro-businesses are changing this situation. According to another study, sixty percent of women engage in economic activities associated with agriculture and related fields. One study provided a methodical and organised way of presenting positive images of Indian women. Women's control over their income, credit, and savings has significantly improved, and NGOs and government organisations stand on an equal footing in enhancing facilities for women. Banking practices play a beneficial role in SHG microfinance schemes. It was found that the poor and those living in rural areas were developing saving habits, which opened doors to improved technology and various forms of promotional support. Women's education and literacy make a significant contribution, enabling them to realise their potential and gain power.
Poor people are connected to Self-Help Groups (SHGs) through supporting institutions, mainly NGOs, banks, and government officials. The scheme establishes savings, frequent loan-repayment instalments, training, and regular meetings, and also covers marketing, family planning, healthcare, basic literacy, and occupational skills. Women gain power and greater negotiating strength in dividing resources within households because they earn more money and recognise the worth of their time. Increased income prompts a woman to invest in her children's education, housing, and nutrition. Through the regular group sessions that SHGs conduct, women get the chance to break out of the daily routine and share their problems with one another. Interaction with other women in SHGs, both within the group and across groups, boosts confidence, improves eloquence, and encourages a woman to pursue her interests. Increased mobility, enhanced networking opportunities, and better communication all support the empowerment of women. Social norms greatly influence women's decision-making; the interaction between SHG institutions and their members makes women more inclined to engage actively in public life and better equipped to pursue their interests in local politics and society. The rise in the number of women serving in local government also empowers women. The empowerment of women is a process as much as an outcome.

 Role of SHGs in Odisha

The women of Odisha have historically made significant contributions to the state's socio-cultural, political, and economic life, and they persist in doing so despite formidable obstacles such as poverty, exploitation, sexual assault, illiteracy, and a lack of autonomy. The Odisha government has reaffirmed its commitment to achieving gender equality and has pledged to place women at the centre of all development initiatives. Odisha now has enormous potential in its women. Not only are Self-Help Groups (SHGs) becoming more prevalent in the state, but more groups are also stepping up to take on a range of trades with bank linkage in order to support the development of women. Nowadays, women's SHGs are taking up a variety of income-generating activities, such as piggery, goatery, pisciculture, dairy, running PDS outlets, kerosene dealership, floriculture, and rope making. The 8th of March 2001 marked the launch of 'Mission Shakti', an initiative by the Chief Minister of Odisha to empower women across the state through Self-Help Groups. A couple of decades down the line, it is safe to say that it has slowly and steadily transformed the lives of women, their families, and the places they live. Today, it is a testimony to success: over 70 lakh women have been assembled into 6 lakh Self-Help Groups to help one another and gain prosperity.

 Mission Shakti: A Silent Revolution in the State of Odisha

SHAKTI is a powerful term that symbolises feminine strength and alludes to the primordial female force that powers the cosmos: an energy so strong that it affects everything in its way and is felt not only by women. It changes everything.
The well-named Mission Shakti was the idea of Odisha Chief Minister Shri Naveen Patnaik. It was introduced on 8 March 2001 with the goal of empowering women by forming women's Self-Help Groups. How much 'Mission Shakti' has changed over the past 20 years is quite remarkable: it is no longer merely a programme but a silent revolution touching every resident of the state, urban and rural. Beyond its initial goals, it has expanded to affect not only the lives of the women it assists but also those of their families, communities, and cultures. The mission of Mission Shakti is to enable women to engage in profitable enterprises by giving them access to loans and market connections. Mission Shakti, the government's flagship scheme, empowers women through WSHGs, and it is intended that, over time, an increasing number of women will join them. Across all of the state's blocks and urban local bodies, over 70 lakh women have so far been organised into 6 lakh groups. Constant handholding and monitoring are carried out throughout the year to support the existing WSHGs and to give impetus to the formation of new ones. To this end, the State government established a distinct Mission Shakti directorate inside the Women & Child Development Department in 2018, and a separate Department of Mission Shakti was created in 2021.

 Vision

Mission Shakti envisions Odisha as a gender-neutral, equal-opportunity state where women are empowered to live with dignity and succeed economically.

 Mission Statement

To contribute to the creation of a society that is independent, aware of socio-economic issues, and cooperative; that carries a spirit of women's empowerment allowing women to pursue their chosen activities without interference or dependence; that fosters leadership development while upholding gender equity; and that, above all, values diversity and mutual respect for the welfare of society as a whole.

 Objectives of the Odisha Mission Shakti Programme

Through women's Self-Help Groups, Odisha Mission Shakti seeks to empower women financially. The project provides a variety of programmes that address livelihood skill development, market connections, financial inclusion, institution building, and capacity building. A Self-Help Group that receives funding can apply for Mission Shakti loans, revolving funds, and seed money; the Mission Shakti loan is interest-free. The programme will help Self-Help Groups become self-sufficient, which will in turn empower women, enabling them to attain self-sufficiency, a sustainable means of subsistence, and financial competence. Its objectives include:

 Connecting women's Self-Help Groups with banks.
 Supporting Self-Help Groups.
 Offering them credit connections, market support, and technical assistance as needed.
 Offering knowledge and instruction in the management of women's Self-Help Groups.
 Marketing and promoting the products of women's Self-Help Groups.
 Benefits and Features of the Programme

 The Odisha government introduced the Odisha Mission Shakti programme on 8 March 2001.
 Through this programme, women in the state receive financial assistance through their participation in federations and Self-Help Groups.
 The plan establishes a connection between women and financial institutions.
 In addition, the Self-Help Groups give women institutional credit, allowing them to work for themselves.
 A variety of financial support options, including revolving funds, Mission Shakti loans, and seed money, are offered under the initiative.
 The government will also form strategic alliances with the banking industry to guarantee hassle-free banking services to Self-Help Group members at various levels.
 A distinct Department of Mission Shakti has been established for the proper monitoring and implementation of the project.
 The scheme also empowers women economically.
 To inform women about the government's financial inclusion measures, the government also conducts credit counselling and financial literacy programmes at the village level.

V. CONCLUSION

Women empowerment through Self-Help Groups has proven to be an effective strategy for fostering financial independence, social cohesion and personal growth. By providing women with a platform to support each other, gain skills, access resources and start businesses, Self-Help Groups contribute to their economic, social, and psychological empowerment. Ultimately, the success of Self-Help Groups in empowering women underscores the importance of community-driven initiatives in addressing gender inequality and promoting sustainable development.
2024-04-11T15:03:56.026Z
2024-04-09T00:00:00.000
{ "year": 2024, "sha1": "986dea8fb10c5c48026b1cc9458270c24f5eecb1", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.38124/ijisrt/ijisrt24mar1665", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "684ccfbdb6fc8c24f40dd7791d6cd416756c1e74", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [] }
53412450
pes2o/s2orc
v3-fos-license
Developments in Modelling Positive Displacement Screw Machines

Introduction

It has been estimated that almost 20% of the world's electricity consumption is used for gas compression and pumping. For example, in developed countries, more than 25% of the electrical power output during the summer months is used for the compression of refrigerants in air-conditioning systems. For most industrial compression and pumping applications, machines of the positive displacement type are used and, due to their technological advantages over other types, approximately 85% of industrial compressors now made are of the twin screw type. Although these are used for a variety of applications, such as compressors, expanders, blowers, vacuum pumps and liquid and multiphase pumps, the most common use of such machines is for industrial refrigeration, air conditioning and process gas compression. Depending on the application, screw compressors may operate flooded by oil or another fluid, or without any form of internal rotor cooling or lubrication. Typical examples of a disassembled oil-injected screw compressor and an assembled dry compressor are presented in Fig. 1.

The performance of such compressors largely depends on the rotor geometry, which may vary depending on the number of lobes in each rotor, the basic rotor profile and the relative proportions of each rotor lobe segment. Improvements that maximise the efficiency, robustness and appearance of these machines are imperative for every manufacturer in order to stay competitive in the market and, at the same time, offer a more environmentally friendly product.

Initially, screw compressors were designed on the assumptions that an ideal gas is compressed in a leak-proof working chamber by a process which can reasonably be approximated, in terms of pressure-volume changes, by choosing a suitable value of the exponent n in the relationship pV^n = constant. The advent of digital computing made it possible to model the compression process more accurately and, with the passage of time, ever more detailed models of the internal flow processes have been developed, based on the assumption of one-dimensional non-steady bulk fluid flow and steady one-dimensional leakage flow through the working chamber.
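To make the classical assumption concrete, the sketch below compresses an ideal gas along pV^n = constant over a prescribed volume curve and integrates the indicated work. The volume range, exponent and suction pressure are illustrative assumptions, not data for any real machine.

import numpy as np

n = 1.2                                # assumed polytropic exponent
p_suc = 1.0e5                          # suction pressure, Pa
V = np.linspace(1.0e-3, 2.0e-4, 200)   # cavity volume, suction to discharge, m^3

p = p_suc * (V[0] / V) ** n            # p * V^n = constant
W = -np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(V))  # indicated work per cavity, J

print(f"discharge pressure {p[-1] / 1e5:.2f} bar, indicated work {W:.1f} J")

The more detailed chamber models discussed next replace the fixed exponent with differential mass and energy balances that include leakage and oil injection.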
By this means, together with the selection of suitable flow coefficients for the passages and an equation of state for the working fluid, it was possible to develop a set of non-linear differential equations which describe the instantaneous rates of heat flow, fluid flow and work across the boundaries of the compressor system. These equations can be solved numerically to estimate pressure-volume changes through the suction, compression and delivery stages and hence determine the net torque, power input and fluid flow, together with the isentropic and volumetric efficiencies of a compressor (a minimal sketch of such a chamber model is given at the end of this section). In addition, the effects of oil injection on performance can be assessed by assuming that any oil passes through the machine as a uniformly distributed spray with an assumed mean droplet diameter. Such models have been refined by comparing performance predictions derived from them with experimentally derived data. A typical result of such modelling is the suite of computer programs described by Stosic et al., 2005. Similar work was also carried out by many other authors, such as Fleming and Tang, 1998, and Sauls, 1998. Due to their speed and relatively accurate results, such mathematical models are often used in industry. However, they neglect some important flow effects that influence compressor performance, mainly in the suction and discharge ports.

3D CFD in screw compressors
Screw compressor performance can be estimated more precisely by use of three-dimensional Computational Fluid Dynamics (CFD) or Computational Continuum Mechanics (CCM). Computational fluid dynamics covers a broad area, which attracted the interest of many investigators at the beginning of the computer era. It is based on the numerical simulation of the conservation laws of mass, momentum and energy, derived for a given quantity of matter or control mass. Three main groups of methods have been developed through the years, as described by Ferziger and Perić, 1995. The finite volume method is the most commonly used in CFD. In the analysis of screw compressors, the aspect of most concern is unsteady flow calculation with moving boundaries. The usual practice for the analysis of solid body deformation or fluid-structure interaction is to couple a Finite Volume (FV) code with Finite Element (FE) solvers using a specially designed interface. Most CFD and FE vendors use that procedure to take advantage of both FV and FE. Although well established, this procedure may not be entirely suitable for calculation in many situations. One important example of this is conjugate heat transfer, where heat transfer in both the solid and the fluid has to be calculated simultaneously. This is required to estimate the interaction of fluid flow and solid deformation in a screw machine, which may be estimated by use of Computational Continuum Mechanics (CCM). In this case, small deformations of the solid casing are caused by the large pressure and temperature gradients within the flow domains. Although these deformations are relatively small, they are of a similar order of magnitude to the compressor clearance and may thus significantly change the flow within the machine, as described by Kovacevic et al., 2003. A number of commercial CFD software packages are currently available which may be able to cope with the complexity of flow through screw machines and may be integrated with CAD. However well developed these codes are, there are still limitations in their use for some specific applications.
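Before turning to meshing, the chamber-model idea referred to above can be made concrete with a minimal sketch (not the cited authors' code): an ideal gas in a working chamber of prescribed volume, with a single orifice-type leakage path and adiabatic walls, integrated over one compression stroke with a classical fourth-order Runge-Kutta scheme of the kind typically used for such models. All geometric and flow parameters are illustrative assumptions.

import math

R, cv = 287.0, 718.0            # gas constant and cv for air [J/(kg K)]
p_suc, T_suc = 1.0e5, 293.0     # suction state (assumed)
Cd, A_gap = 0.7, 2.0e-6         # leakage flow coefficient and gap area [m^2] (assumed)
omega = 2.0 * math.pi * 50.0    # shaft speed, 50 rev/s (assumed)

def V(t):                        # prescribed chamber volume over the cycle (assumed shape)
    return 0.3e-3 + 0.225e-3 * math.cos(omega * t)

def dVdt(t):
    return -0.225e-3 * omega * math.sin(omega * t)

def rhs(t, y):
    """Mass and energy balances for the trapped gas: returns [dm/dt, dT/dt]."""
    m, T = y
    p = m * R * T / V(t)
    dp = max(p - p_suc, 0.0)
    mdot = Cd * A_gap * math.sqrt(2.0 * (p / (R * T)) * dp)  # orifice leakage law
    dT = (-p * dVdt(t) - mdot * R * T) / (m * cv)            # adiabatic walls
    return [-mdot, dT]

def rk4_step(f, t, y, h):        # classical fourth-order Runge-Kutta step
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h*ki for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

t, h = 0.0, 1.0e-5
y = [p_suc * V(0.0) / (R * T_suc), T_suc]    # chamber filled at suction conditions
while t < 0.5 / 50.0:                         # integrate over half a revolution
    y = rk4_step(rhs, t, y, h)
    t += h
m, T = y
print(f"end-of-stroke pressure: {m * R * T / V(t) / 1e5:.1f} bar, temperature: {T:.0f} K")

Extending this basis with oil injection, wall heat transfer and the suction and discharge port flows leads to the equation sets solved by the programs cited above.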
For the analysis of screw machines, a moving, stretching and sliding mesh has to be produced to map the working chamber of the machine. Today's commercial grid generators are still not capable of coping with these demands. Despite a significant number of papers published in the area of computational fluid dynamics, only a few deal with its application to screw compressors. All of these are relatively recent, starting with Stošić et al. in 1996. That paper describes the principles of three-dimensional numerical modelling applied to positive displacement screw machines. However, the work was not fully successful due to the relatively limited grid generation method. Kovačević, Stošić and Smith published a number of papers between 1999 and 2001. These papers set the scene for the commercial use of 3-D numerical analysis in the screw compressor world. In later years, the authors published a series of papers related to both grid generation in screw compressors and 3D numerical performance estimation, as described by Kovacevic et al., 2003 and 2005. These include fluid-solid interaction in screw machines, Kovacevic et al., 2004. The breakthrough was made when an analytical transfinite interpolation method with adaptive meshing was used to develop an automatic numerical mapping method for any arbitrary screw compressor geometry. It is explained in detail by Kovacevic, 2005. This was later regularly used for the analysis of processes in screw compressors by means of an interface program called SCORG (Screw COmpressor Rotor Geometry Grid generator). This suite enables numerical mapping of both moving and stationary parts and direct integration with a commercial CFD or CCM code. Although mainly used for CFD in screw machines, the same concept may be used for a variety of other applications. An example of this is the grid generation of the flow paths in a rotary heat exchanger, described by Alagic et al., 2005. A recent monograph on CFD in screw machines, by Kovacevic et al., 2006, gives a comprehensive overview of the methods and tools used. These methods are applicable to all major commercial CFD software packages capable of coping with complex flows, and can be integrated with a variety of CAD systems, as shown by Kovacevic et al., 2007. A typical arrangement of a numerical mesh for CFD calculation of flow in screw compressors is shown in Fig. 2. The moving parts of the flow domain are mapped with a hexahedral block-structured mesh, while the remaining stationary parts are replaced by an unstructured polyhedral mesh produced by a commercial grid generator directly from the CAD system.

The first experimental verification of numerical results was performed and reported by Kovacevic et al., 2002. This study was performed on an oil-injected screw compressor with 'N' type rotors with a 5/6 lobe configuration and a 128 mm male rotor outer diameter. The numerical mesh contained a moderate number of just over half a million cells, of which around 200,000 were used to map the moving parts of the grid. The converged solution, obtained on an office PC, was achieved with 120 time steps in approximately 30 hours of computing time. The results were compared with measurements made on the identical screw air compressor. Four piezo-resistive transducers were positioned in the housing to measure pressure fluctuations across the compressor.

Fig. 2. Numerical mesh for CFD calculation of the screw compressor.
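The transfinite interpolation at the heart of such rotor-to-casing grid generation can be illustrated in two dimensions. The following is a simplified sketch only; the actual SCORG algorithm, with adaptive meshing and boundary orthogonalisation, is considerably more involved, and the lobe-like contour used here is an invented example.

import numpy as np

def transfinite_patch(bottom, top, left, right, nu, nv):
    """Coons/transfinite interpolation of a structured grid from four boundary
    curves, each given as a function of a parameter in [0, 1]."""
    u = np.linspace(0.0, 1.0, nu)[:, None, None]
    v = np.linspace(0.0, 1.0, nv)[None, :, None]
    B = np.array([bottom(x) for x in np.linspace(0, 1, nu)])[:, None, :]
    T = np.array([top(x) for x in np.linspace(0, 1, nu)])[:, None, :]
    L = np.array([left(x) for x in np.linspace(0, 1, nv)])[None, :, :]
    Rt = np.array([right(x) for x in np.linspace(0, 1, nv)])[None, :, :]
    # corner points for the bilinear correction term
    c00, c10, c01, c11 = bottom(0.0), bottom(1.0), top(0.0), top(1.0)
    return ((1 - v) * B + v * T + (1 - u) * L + u * Rt
            - ((1 - u) * (1 - v) * c00 + u * (1 - v) * c10
               + (1 - u) * v * c01 + u * v * c11))

# Example: mesh the gap between a lobe-like inner contour and a circular casing.
inner = lambda s: np.array([np.cos(2*np.pi*s), np.sin(2*np.pi*s)]) * (1 + 0.2*np.cos(10*np.pi*s))
outer = lambda s: 2.0 * np.array([np.cos(2*np.pi*s), np.sin(2*np.pi*s)])
seam = lambda t: (1 - t) * inner(0.0) + t * outer(0.0)   # straight seam closing the annulus
g = transfinite_patch(inner, outer, seam, seam, 60, 15)
print(g.shape)   # (60, 15, 2): 60 points around the rotor, 15 across the gap

Because the boundary curves share their corner points, the interior grid follows both the rotor and the casing exactly; regenerating it at every rotor angle yields the moving, stretching and sliding mesh described above.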
The results obtained were compared for discharge pressures of 6, 7, 8 and 9 bar respectively. Good agreement was obtained both for the integral parameters and the instantaneous pressure values, as shown in Fig. 3. The report also discussed the effects of various factors on the calculation accuracy. These included variations in mesh sizes, turbulence models, differencing schemes and many other factors. It was concluded that these changes do not affect the overall calculation results, which were reasonably accurate, and it was therefore recommended that the method can be applied in industry. However, it was also shown that the use of alternative differencing schemes and turbulence models significantly influences local velocity and pressure values in particular regions of the machine. Although these local values have a low impact on the overall performance, their influence on flow development had to be further investigated.

Very few authors have analysed local effects in screw compressors. For example Vimmr, 2006, following on from Kauder et al., 2000, analysed flow through a static mesh of the single leakage flow path at the tip of the male rotor, concluding that the rotor relative velocity does not affect flow velocities significantly and that neither of the turbulence models they used significantly changed the outcome of the modelling. That was in agreement with the findings of Kovacevic et al., 2006, but it also confirmed that full 3D CFD results could not be validated by simplified numerical or experimental analysis alone. Instead, a full understanding of the local velocities in the suction, compression and discharge chambers of the machine was needed to further validate the existing methods and to develop additional models, if needed. The following sections include the validation of the CFD calculation by the use of Laser Doppler Velocimetry (LDV). Additionally, several examples of the use of CFD for the analysis of different types of screw machines are presented to illustrate the opportunities for the use of CFD both in industry and academia.

LDV flow measurements in a screw compressor
Flow in a screw compressor is complex, three-dimensional and strongly time dependent, similar to that in the cylinder flows of gasoline and diesel engines, in centrifugal pumps or in turbochargers. This implies that the measuring instrumentation must be robust enough to withstand the unsteady aerodynamic forces and oil drag, must have a high spatial and temporal resolution and, most importantly, must not disturb the flow. Point optical diagnostics, like Laser Doppler Velocimetry (Durst, 2000; Albrecht et al., 2003; Drain, 1986), can fulfil these requirements, as described by Nouri et al., 2006. In order to measure flow velocities inside a screw compressor, a test facility was set up at City University, where this technique was used. Extensive measurements were taken of the velocities in the compression domain and in the discharge chamber of the test air screw compressor. A transparent window, for optical access into the rotor chamber of the test compressor, was machined from acrylic to the exact internal profile of the rotor casing and was positioned on the pressure side of the compressor near the discharge port, as shown in Fig. 4. After machining, the internal and external surfaces of the window were fully polished to allow optical access.
Optical access to the discharge chamber was obtained through a transparent plate, 20 mm thick, installed on the upper part of the exhaust pipe. The optical compressor was then installed in a standard laboratory air compressor test rig, modified to accommodate the transmitting and collecting optics and their traverses, as shown to the right of Fig. 4. The laser Doppler velocimeter operated in a dual-beam near-backscatter mode. It comprised a 700 mW argon-ion laser, a diffraction-grating unit to divide the light beam into two and provide frequency shift, and collimating and focusing lenses to form the control volume. A fibre optic cable was used to direct the laser beam from the laser to the transmitting optics, and a mirror was used to direct the beams from the transmitting optics into the compressor through one of the transparent windows. The collecting optics were positioned at around 25° for the rotor chamber and 15° for the discharge chamber from the full backscatter position, and comprised collimating and focusing lenses, a 100 µm pinhole and a photomultiplier equipped with an amplifier. Although the crossing region of the laser beams is an ellipsoid more than 0.5 mm long, the size of the pinhole defines the effective length of the measuring volume, so that it can be represented as a cylinder 100 µm high and 79 µm in diameter. The fringe spacing is 4.33 µm. The signal from the photomultiplier was processed by a TSI processor interfaced to a PC and led to angle-averaged values of the mean and RMS velocities. In order to synchronise the velocity measurements with the location of the rotors, a shaft encoder providing one pulse per revolution and a 3600-pulse train, with an angular resolution of 0.1°, was fixed at the end of the driving shaft. Instantaneous velocity measurements were made over thousands of shaft rotations to provide a sufficient number of samples. In the present study the average sample density was 1350 data points per shaft degree. Since the TSI software is provided with 4 external channels, one of them was used to collect the pressure signal coming from the high-data-rate pressure transducer via an amplifier.

Flow measurements within the compression chamber
Two coordinate systems were defined within the rotor chamber of the compressor, as shown in Fig. 5 (a), (b) and (c). Each of them was applied to one of the rotors, where α_p and R_p are, respectively, the angular and radial positions of the control volume and H_p is the distance from the discharge port centre. Taking the appropriate coordinate system, measurements were obtained at R_p = 48, 56 and 63.2 mm, α_p = 27° and H_p = 20 mm for the male rotor, and at R_p = 42, 46 and 50 mm, α_p = 27° and H_p = 20 mm for the female rotor. Typical velocity values measured in the working chamber are shown in Fig. 6. Three zones were identified in the working chamber near the discharge port. Zone (1) covers most of the main trapped working domain, with fairly uniform velocities. Zone (2) is associated with the opening of the discharge port. The velocities and turbulence in this zone are much higher than in Zone (1). In this zone the flow is driven by the pressure difference between the rotors and the discharge chamber. This is especially visible in the case shown, since the pressure in the discharge system was maintained at virtually atmospheric conditions. Zone (3) is associated with the leakage flows between the rotors and the casing, where velocities increase to values higher than in Zone (1) but are not as chaotic as in Zone (2).
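The processing chain from Doppler bursts to the angle-averaged statistics quoted above can be sketched as follows: each burst contributes one velocity sample v = fringe spacing x Doppler frequency, which is then binned by the shaft-encoder angle at which it was recorded. This is a schematic sketch only, with synthetic data; the fringe spacing is the value quoted above, everything else is an illustrative assumption.

import numpy as np

FRINGE_SPACING = 4.33e-6   # m, the fringe spacing quoted above

def angle_resolved_stats(doppler_freq_hz, shaft_angle_deg, bin_deg=1.0):
    """Angle-averaged mean and RMS velocity from LDV burst data."""
    velocity = FRINGE_SPACING * np.asarray(doppler_freq_hz)   # v = d_f * f_D
    edges = np.arange(0.0, 360.0 + bin_deg, bin_deg)
    idx = np.digitize(shaft_angle_deg, edges) - 1
    nbins = len(edges) - 1
    mean = np.full(nbins, np.nan)
    rms = np.full(nbins, np.nan)
    for b in range(nbins):
        v = velocity[idx == b]
        if v.size:
            mean[b], rms[b] = v.mean(), v.std()
    return mean, rms

# Synthetic example: 50,000 bursts of a flow whose mean varies with shaft angle.
rng = np.random.default_rng(0)
ang = rng.uniform(0.0, 360.0, 50_000)
true_v = 10.0 + 3.0 * np.sin(np.radians(ang))                 # m/s, assumed profile
f_d = true_v / FRINGE_SPACING + rng.normal(0.0, 1.0e5, ang.size)
mean_v, rms_v = angle_resolved_stats(f_d, ang)
print(f"mean at 90 deg ~ {mean_v[90]:.2f} m/s, RMS ~ {rms_v[90]:.2f} m/s")

With 0.1° encoder resolution and roughly 1350 samples per degree, as in the study, each angular bin contains enough independent bursts for stable mean and RMS estimates.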
Conclusions derived from the measurements, as explained in more detail in Guerrato et al., 2007, are as follows: (1) Chamber-to-chamber velocity variations were up to 10% more pronounced near the leading edge of the rotor. (2) The mean axial flow within the working chamber decreases from the trailing to the leading edge, with velocity values up to 1.75 times larger than the rotor surface velocity near the trailing edge region. (3) The effect of the opening of the discharge port on velocities is significant near the leading edge of the rotors and causes a complex and unstable flow with very steep velocity gradients. The highest impact of the port opening on the flow is experienced near the tip of the rotor, with values decreasing towards the rotor root.

Fig. 7 (a) shows a schematic arrangement of the discharge chamber, divided into the discharge port domain and the discharge cavity. Fig. 7 (b) and (c) show the measurement locations for two characteristic cross sections, called the W and V sections. The coordinate system, drawn in all of the sketches in Fig. 7, identifies the location of the measured control volume (CV). Measurements were made at X_p = 5.5 mm, Z_p = 13 mm and Y_p = -8 to 13 mm. The main findings were as follows: (1) Velocities are higher than in the compression chamber due to fluid expansion in the port between sections W and V. (2) The axial velocity distribution within the discharge chamber is strongly correlated with the rotor angular position, since the rotors periodically cover and expose the discharge port, through which, at some point, more than one working chamber is connected. The left diagram in Fig. 9 shows the case when only one compression chamber is connected to the discharge port and the flow is relatively stable. This corresponds with the domains to the left of the port opening line in the diagrams in Fig. 8. As another chamber with high pressure flow connects to the discharge chamber, shown in the right diagram in Fig. 9, jet-like flows occur near the sides of the discharge chamber passage. These are rendered with high velocities in the domain to the right of the thick port opening line in Fig. 8. (3) The jet flows create velocity peaks that make the flow in that region highly turbulent.

Validation of CFD results by LDV measurements
The numerical mesh used for the CFD calculation and comparison with the measured data obtained with the LDV technique is shown in Fig. 2. The flow paths around the rotating parts of the machine are generated using the in-house software SCORG. The pre-processing script generated in SCORG is used to connect these with the stationary numerical mesh of the compressor ports, generated directly from the CAD system, and to transfer the entire case to the CFD solver. Fig. 2 shows the mid-sized mesh, consisting of 935,000 numerical cells. For the purpose of obtaining a grid-independent solution, three different meshes were generated, the smallest consisting of 600,000 numerical cells and the biggest of 2.7 million cells, which was the largest case that could be calculated by the single processor of the computer that was used.

Compression chamber
Due to space limitations, this chapter only compares the CFD results extracted from the middle-size model with the LDV results. The compressor working conditions and the position of the CV are identical to those of the LDV measurements. Fig. 10 shows a comparison of the axial mean velocities in the compression chamber close to the discharge port. This figure shows very good agreement throughout Zone (1) and Zone (2), as defined in Fig. 6.
In Zone (3) both the measured and calculated velocities increase, but the increase in the velocities obtained from CFD is larger, as a consequence of the inability of the k-ε turbulence model used to cope with flows near walls in rather large numerical cells. Such a configuration of the numerical mesh is the result of the methodology used for generating and moving the numerical mesh, as explained in more detail by Guerrato et al., 2007.

Fig. 11 shows a comparison of the axial velocities in the discharge port. The differences appear to be rather large at the locations where the velocities were measured, although the trends and mean values are similar. It is confirmed by calculation that the highest values of the axial velocity are in the middle section through the discharge port, which corresponds to the period of time when only one working chamber is connected to the discharge chamber. On both the male rotor side of the discharge port, as shown in the top diagram of Fig. 11, and on the female rotor side of the port, shown in the bottom diagram, the velocities during that process decrease towards the walls. However, during the phase when another working chamber is connected to the discharge port, the velocities near the walls increase due to the jet-like flows induced by the higher pressure differences on the outside of the rotors. The measurements confirm that turbulence plays a significant role in the narrow passage which connects the compression chamber with the large discharge domain. This is most probably the reason why the CFD results do not replicate the measured values more exactly. Further research into turbulence models for the internal flow in compressor ports is therefore suggested. Favourably, the flow on both sides of that region appears to be less turbulent. Because of this, and because the internal energy changes in positive displacement machines are significantly larger than the kinetic energy, this does not greatly affect the overall estimation of performance. Despite this, further development and improvement of 3D CFD codes are needed.

Combined chamber and 3D CFD models
A prerequisite for success in the highly competitive market of screw machines is the ability to design, analyse and produce machines quickly. 3D CFD calculation, although accurate, may take significant time to achieve a desired outcome. Compressor manufacturers are therefore interested in faster but still accurate calculations in order to optimise and improve parts of their machines. Such a goal can be achieved by the use of combined mathematical models. The idea behind coupled models is that the components of the system of greater interest are modelled with full 3-D simulations, while the components of secondary concern are simulated with a thermodynamic model. This combines the advantages of the fast computation and high flexibility of the chamber model described by Stosic et al., 2005, with the enhanced capabilities of the 3-D model described in detail by Kovacevic et al., 2006. The entire positive displacement compressor can generally be represented by three flow domains which complete a working cycle of the machine. As shown in Fig. 13, the compression chamber, in which the compression process occurs, is connected to the suction and discharge chambers, through which the compressor communicates with the environment.

Fig. 13. Schematic representation of a compressor working process.
The thermodynamic properties which describe the state of the working fluid in these chambers are the volume of the chamber, the pressure, the temperature and the density. Internal energy changes in positive displacement machines are significantly larger than kinetic energy changes. In this case the compression process can be described quite accurately by a set of differential equations for internal energy and mass conservation, such as those of Stosic et al., 2005. These must be closed with equations that define the leakage flows, liquid injection and heat exchange, as well as the mass and energy contributions from the inlet and outlet flows, in order to be solvable. An equation of state must also be included to establish the relationship between the working fluid thermodynamic properties such as pressure, temperature and volume. The inlet and outlet flows are the means by which a compressor chamber exchanges energy and mass with its surroundings. These occur through openings which generally change both in size and shape with time. More than one of them can be connected to the compression chamber at the same time. In the mathematical models mentioned earlier, these flows are introduced through the enthalpy and mass contributions to the fluid in the compression chamber. If all three chambers are simulated by a quasi one-dimensional thermodynamic model, as described by Stosic et al., 2005, both the mass and energy flow estimates are based on the assumption of adiabatic flow through the suction and discharge cavities. However, if a one-dimensional model of the compression chamber is to be integrated with a three-dimensional model of the suction and discharge chambers, the exchange of mass and energy must be calculated by the summation of the boundary flows that occur in the three-dimensional domains. Once the integrated flows are added to those of the one-dimensional model in the compression chamber, they can be used to calculate the thermodynamic properties in that chamber in the form of pressure, temperature and density. The solution of the one-dimensional model is then obtained by integration of the differential equations, typically using the fourth-order Runge-Kutta method, or its equivalent, as described by Stosic et al., 2005; Mujic et al., 2008; and Mujic, 2009. The derived values of pressure and temperature in the compression chamber are later used as boundary conditions for the three-dimensional models of the suction and discharge chambers. An example of the interface where these boundary conditions are applied is given in Fig. 14.

Fluid-solid interaction
Fluid and solid interaction in screw compressors was investigated for three common applications, namely an oil-injected air compressor of moderate pressure ratio, a dry air compressor of low pressure ratio, and a high pressure oil-flooded compressor. In all cases the rotors are of the 'N' type with a 5/6 lobe configuration. The rotor deformations in the ordinary oil-injected screw compressor are shown in Fig. 15. These cause an increase in the clearance gap between the rotors. However, these deformations are an order of magnitude smaller than the rotor clearances. In order to make the results visible, the deformations in Fig. 15 are enlarged 20,000 times.

Fig. 15. Rotor deformations in the oil-injected compressor.

In the oil-free air compressor, due to the lack of cooling, the air temperature rise is much higher. For a 3 bar discharge pressure, the exit temperature has an average value of 180 °C.
The deformations of the rotor are presented on the left of Fig. 16. The fluid temperature in the immediate vicinity of the solid boundary changes rapidly, as shown in the right diagram of the same figure. However, the temperature of the rotor pair is lower, due to the continuous averaging of the pressure and temperature oscillations in the surrounding fluid. This is shown in the right diagram of Fig. 16, where the temperature distribution is given in cross section for both the fluid flow and the rotor body. The deformation presented in the figure is enlarged 5,000 times in order to make it visible. The rotor deformation has the same order of magnitude as the rotor clearance. The high pressure oil-injected application was taken as a CO2 refrigeration compressor with suction conditions of 30 bar and 0 °C and discharge conditions of 90 bar and 40 °C. In this case, the large pressure difference was the main cause of the rotor deflection, with the highest deformation in excess of 15 µm, as shown in Fig. 17. The deformation pattern of the rotors is similar to the low pressure case but with a slight enlargement at discharge.

Use of CFD for noise prediction
Identification of the sources of noise in screw compressors, and their attenuation, has become an important issue for the majority of applications. Pressure fluctuations in the discharge port affect not only the aero-acoustics in that domain but also the mechanically generated noise due to rotor rattling. It is believed that adequate porting can decrease the level of noise and increase the performance of the machine.

Fig. 19. Pressure oscillations in the 3D CFD model.

A chamber model was first used to estimate the pressure oscillations as a function of the shape of the port and the cross-sectional area of the connecting flange. These predictions can estimate the main harmonics of the generated noise relatively accurately. However, this model does not take into account the shape of the discharge chamber, which may play an important role in generating higher harmonics. Further steps were therefore undertaken to analyse the pressure fluctuations in the discharge port with a full 3D CFD code. The results obtained by this model agree well with measurements, as shown by Mujic, 2009, but the model is inadequate for everyday industrial use. Therefore a combined model was developed, which combines the accuracy of a full 3D model with the speed of a chamber model. A comparison of the results provided by the three numerical models is shown in Fig. 20. The 3-D and coupled models offer better accuracy than the thermodynamic model, especially for the higher harmonics of the gas pulsations. The discharge chamber geometry certainly does influence the gas pulsations, and the accuracy of the prediction is thereby improved in the case of the 3-D and coupled models. Additionally, since the thermodynamic model assumes a uniform distribution of fluid properties across the control volume, the value used for comparison of the results is that obtained at the centre of the discharge chamber. In the case of the 3-D model, the compared pressure values are those taken at the identical position to the pressure probe in the real chamber. As both the 3-D and the coupled models include the momentum equation, they can account for pressure wave propagation through the discharge chamber. The pressure wave passes the measuring position and influences the pressure value there.
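The harmonic amplitudes compared in Fig. 20 are obtained from predicted or measured pressure traces by Fourier analysis. A minimal sketch follows; the fundamental frequency corresponds to a 6-lobe male rotor at 50 rev/s, and all signal parameters are illustrative assumptions.

import numpy as np

def pulsation_harmonics(p, dt, f0, n=5):
    """Amplitudes of the first n harmonics of fundamental f0 in a pressure trace p.
    Assumes the record covers an integer number of periods of f0."""
    spec = np.fft.rfft(p - np.mean(p))
    amps = 2.0 * np.abs(spec) / len(p)
    freqs = np.fft.rfftfreq(len(p), dt)
    out = []
    for k in range(1, n + 1):
        i = int(round(k * f0 * len(p) * dt))   # bin index of the k-th harmonic
        out.append((freqs[i], amps[i]))
    return out

# Synthetic trace: lobe-passing fundamental at 300 Hz plus a weaker first overtone.
fs = 50_000.0
t = np.arange(0.0, 0.1, 1.0 / fs)
p = 7.0e5 + 2.0e4 * np.sin(2*np.pi*300*t) + 5.0e3 * np.sin(2*np.pi*600*t)
for f, a in pulsation_harmonics(p, 1.0 / fs, 300.0, 4):
    print(f"{f:6.0f} Hz : {a / 1e3:5.1f} kPa")

Comparing such harmonic amplitudes, rather than raw traces, is what allows the chamber, 3-D and coupled models to be ranked for noise prediction purposes.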
Additionally, these two models provide information about other flow properties, such as the velocities within the chamber, and can be useful in the analysis of fluid flow losses. The accuracy and computational time for obtaining a solution with each numerical model are shown in Fig. 21. The chamber model requires modest computer resources and its computational time is much shorter than that required for 3-D computations. The accuracy of the 3-D and coupled models is better than that of the simple chamber model. The coupled model requires an order of magnitude less computational time than the full 3-D model, for a negligible loss in accuracy.

Cavitation in gear pumps
The assembly, the functionality and the numerical mesh of a gear pump are presented in Fig. 22. During operation, damage due to cavitation and erosion occurs at the rotor shafts and in the gaps. The work presented here is the property of CFX Berlin. Steinman, 2006, outlined that the main challenges in this computation were the relatively complex geometry, the relative motion of the moving and deforming grids, the transient interfaces and cavitation. The hexahedral numerical mesh of the moving parts was generated by SCORG, while the stationary parts were meshed by the ANSYS CFX and ICEM tools into a tetrahedral mesh. These two domains were connected through transient interfaces (GGI).

CFD analysis of a multiphase screw pump
Multiphase screw pumps are regularly used in the oil and gas industry. The CFD analysis of the leakage flow and pressure distribution in these pumps has been performed using Star CCM+ software. As an example, the pressure distribution on the first layer of cells of a multiphase pump passage flow, for a 1-10 bar pressure rise, is presented in Fig. 23.

Fig. 23. Pressure distribution on a multiphase oil/natural gas pump and the leakage flow through the blow hole area.

The leakage flow through the clearances and the blow hole area is shown on the right of Fig. 23. The numerical grid was obtained by an in-house grid generator originally developed for the analysis of screw compressors. The pressure distribution for alternative multi-rotor applications is shown in Fig. 24.

Fig. 24. Pressure field in hydraulic motors with two female rotors (left) and three female rotors (right).

As shown in Fig. 24, the machine with three female rotors has a smaller pressure drop between the interlobes than the pump with two female rotors. This allows a smaller leakage flow to be achieved in the machine with more female rotors. However, integration of the forces over the rotor surfaces gives almost exactly the same load on the rotors.

Conclusions
Various levels of mathematical modelling are used in practice for the performance estimation of screw compressors. Industry mostly uses either empirically fitted or chamber models, while researchers tend to use 3D CFD. An extensive study has been performed to validate the accuracy of the 3D CFD results. Laser Doppler Velocimetry (LDV) was used to measure the fluid mean velocity distribution and the corresponding turbulence fluctuations at various cross sections in the screw compressor. A comparison of measurements and predictions has allowed validation and further development of the CFD package to a stage which will make it possible to design future screw compressors without the need for expensive and time-consuming experiments and 'tuning' of computational models.
It appeared evident from the LDV measurements that some effects of screw compressor flow are not always well captured by the existing turbulence models. Initial investigation of this problem indicated that a more suitable turbulence model, capable of analysing flows in the sliding and stretching domains of a screw compressor, may need to be found or developed and validated. Integrating 3D CFD flow analysis in the suction and discharge chambers with a chamber model in the compression chamber offers the possibility of faster and more accurate analysis when optimising compressor port designs. This is particularly well suited to industrial optimisation. The results obtained are very encouraging, but further improvements may still be required. Four test cases were presented in this publication. They demonstrate the capabilities, accuracy and scope of application of the developed tool. Mathematical modelling is always a simulation of reality and not reality itself. This publication gives guidelines for the use of existing modelling principles for the performance prediction of screw machines and outlines possibilities for further improvements.

Acknowledgements
The EPSRC project 'Experimental and Theoretical Investigation of Fluid Flow in Screw Compressors' was performed at City University London. It was jointly funded by EPSRC, The Trane Company and Lotus Engineering. A part of this paper presents the LDV measurements made by Mr Diego Guerrato for this project. The authors wish to thank all participants in this project for their contribution. The work on cavitation in gear pumps was performed by CFX Berlin. The authors wish to thank Dr A. Steinman for permission to refer to it in this paper.
Nonlinear Progressive Collapse Analysis Including Distributed Plasticity

This paper demonstrates the effect of incorporating distributed plasticity in nonlinear analytical models used to assess the potential for progressive collapse of steel framed regular building structures. The emphasis of this paper is on the deformation response under the notionally removed column in a typical Alternate Path (AP) method. The AP method employed in this paper is based on the provisions of the Unified Facilities Criteria - Design of Buildings to Resist Progressive Collapse, developed and updated by the U.S. Department of Defense [1]. The AP method is often used to assess the potential for progressive collapse of building structures that fall under Occupancy Category III or IV. A case study steel building is used to examine the effect of incorporating distributed plasticity, where moment frames were used on the perimeter as well as the interior of the three-dimensional structural system. It is concluded that the use of moment resisting frames within the structural system will enhance resistance to progressive collapse through ductile deformation response, and that it is conservative to ignore the effects of distributed plasticity in determining the peak displacement response under the notionally removed column.

Introduction
Progressive collapse of building structures is not covered extensively in international building codes and design standards. The U.S. Department of Defense (DoD) publishes UFC 4-023-03, a guide for progressive collapse resistant design of building structures [1]. A building structure is assessed for progressive collapse potential after it is designed in accordance with the applicable building code. The structural system is assessed in terms of its capacity to transfer loads in a ductile manner after the loss of primary load-carrying members. Framed structures are assessed by quantifying the consequences of the notional removal of critical corner, interior, and exterior columns. Loss of corner columns is particularly critical because it leads to long unsupported spans [2]. The current UFC guide [1] embraces the design philosophy that progressive collapse resistant steel structures are ductile systems, similar in some ways to earthquake resistant structural systems, particularly when certain types of moment resisting connections are used [3]. Several researchers have suggested that lessons learned from decades of studying structural performance during earthquakes, aimed at producing ductile structural systems, could guide engineers in producing progressive collapse resistant systems [4]. UFC defines the required strength for each structural component as force-controlled or deformation-controlled, based on the force-deformation relationship for the required strength. This paper outlines the application of the AP method to assess the capacity of steel structural systems to transmit gravity load, following the loss of a primary load carrying member in a framed steel structural system, using nonlinear static analysis procedures. Dynamic effects associated with the sudden removal of a column are incorporated using load increase factors. Linear analysis procedures are the dominant methods in professional design offices for a large class of structures, with the exception of geometric nonlinearities in steel structures, which must be rigorously investigated.
The application of the AP method using Linear Static Analysis (LSA) procedures is permitted for structures that do not contain significant irregularities or if the demand-to-capacity ratio (DCR) is less than 2.0. The computationally more expensive Nonlinear Static Analysis (NSA) is permitted without limitations on DCR or structural irregularities. The use of LSA and NSA with the AP method is permitted by UFC [1] and is justified in two ways: 1) the computational cost of LSA and NSA is lower than that of dynamic analysis, and 2) progressive collapse is hazard independent and involves significant uncertainties that justify using approximate methods such as LSA and NSA. This paper is based upon Mohamed et al. [5], which examines the use of LSA procedures for AP investigations. The current paper employs NSA to assess the potential for progressive collapse of steel framed structures and addresses the effect of ignoring distributed plasticity on deformation response.

Nonlinear static analysis procedure and the alternate path method
The alternate path method is one of the most commonly used procedures to assess the ability of the structure to transmit gravity loads safely to a competent foundation. The requirement to use the AP method depends on the building Occupancy Category (OC). The AP method is typically required for OC III and OC IV, and in some cases for OC II. OCs are defined in UFC 3-301-01 Structural Engineering [6]. The occupancy categories defined in UFC 3-301-01 are similar to the Risk Categories currently used in ASCE 7-10 [7].

Gravity load for assessment of progressive collapse potential
Progressive collapse is a gravity-driven extreme event. In order to assess the vulnerability of a building structure to progressive collapse, alternate path analysis is conducted after the notional removal of selected columns [1]. Prescribed gravity loads are applied, and force-controlled and deformation-controlled actions are determined. These actions are compared with certain acceptance criteria, which may be incorporated in the analysis software. Deformation-controlled and force-controlled actions on the structural system obtained using the NSA procedure are calculated by applying the amplified gravity loads given by Eq. 1 to the floor panels adjacent to the notionally removed column at all floor levels [1,5], while Eq. 2 gives the gravity loads on floor panels that are not adjacent to notionally removed columns, used to determine either deformation-controlled or force-controlled actions, as appropriate (an indicative form of these combinations is reproduced at the end of this section). The gravity loads in Eq. 1 and Eq. 2 are applied simultaneously at all floor levels. The gravity load for LSA procedures is similar to that for NSA, except that a Load Increase Factor (LIF) is used instead of the dynamic increase factor indicated in Eq. 1. Progressive collapse assessment using the alternate path method and LSA procedures, as well as the process for determination of the LIF, are discussed elsewhere [1,8]. The discussion in this paper is limited to deformation-controlled actions.

Dynamic increase factor
In this study, the AP method was applied using nonlinear static analyses, where the dynamic effects of column removal are approximated through the use of dynamic increase factors (DIF), as shown in Eq. 1.

Acceptance criteria
For deformation-controlled actions calculated using NSA, acceptance is not based on component strength, but rather on deformation limits controlled by acceptance criteria [1]. The post-yield behaviour is simulated in this paper by inserting concentrated plastic hinges on steel frame objects.
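The bodies of Eq. 1 and Eq. 2 did not survive the extraction of this text. For reference, the UFC 4-023-03 nonlinear static load combinations are usually quoted in the following form; this is a reconstruction from the general literature on the guideline, not from the original paper, and should be checked against the cited edition:

\begin{align}
G_N &= \Omega_N \left[ 1.2\,D + (0.5\,L \ \text{or} \ 0.2\,S) \right] && \text{(Eq. 1, floor areas adjacent to the removed column)} \\
G   &= 1.2\,D + (0.5\,L \ \text{or} \ 0.2\,S) && \text{(Eq. 2, floor areas away from the removed column)}
\end{align}

Here D, L, and S are the dead, live, and snow loads, and \(\Omega_N\) is the dynamic increase factor (DIF) discussed in the subsection that follows.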
The hinge model used in this study to conduct NSA is shown in Fig. 1. The plastic hinge parameters and acceptance criteria for the selected shapes follow [9]; the w-shapes concerned represent some of the design sections for the case study discussed in a subsequent section of this paper. Progressive collapse is an extreme event; therefore, the acceptance criteria used for NSA and applied to deformations are based on collapse prevention, as recommended by [1]. The NSA conducted in this paper for progressive collapse assessment assumes that the plastic hinges described in Fig. 1 are assigned to frame members. The plastic hinge length is as per FEMA 356 [9].

Case study
In order to evaluate the effect of considering or ignoring distributed plasticity when conducting nonlinear static analysis during an AP investigation, the case study developed by Mohamed and Tarmoom [5] is adopted. The building structure described by Fig. 2 and Fig. 3 was analysed and designed. The floor system is a reinforced concrete slab on a steel deck. All secondary beams were designed as composite beams, while the moment resisting frames were designed without composite action. Ordinary moment resisting frames exist along the perimeter of the building and along grid lines A to H. The case study conservatively ignores the contribution of the deck to progressive collapse resistance. Consideration of composite action in the assessment of progressive collapse is discussed in the literature [10]. The members shown in Fig. 2 and Fig. 3 were sized for a reducible live load, including lightweight partitions, of 3.6 kN/m². The weight of the concrete-filled steel deck was 2.79 kN/m². The superimposed dead load was assumed to be 4 kN/m², along with a cladding load of 3.2 kN/m. The relatively high gravity load ensures progressive collapse is indeed gravity driven. The design wind speed was 161 km/h. The steel framed structure was designated as OC III and was analysed and designed according to the AISC LRFD specifications [11], using the Direct Analysis Method. The longest span, of 10 meters, was chosen so as to reflect the effect of long spans in exacerbating progressive collapse. Design was conducted using the computer program ETABS, developed by Computers and Structures Inc., USA. Three-dimensional models were used for analysis/design as well as progressive collapse investigations, to avoid an overly conservative structural response. It should be noted that ignoring three-dimensional effects is not conservative when beams frame into exterior frames, as they may induce considerable torsional effects, especially in corner columns, as demonstrated by Mohamed [2]. Therefore, all interior moment frames along grid lines A to H were assumed to frame into the perimeter frames along grid lines 1 and 6 using shear connections. This theoretically eliminates torsional moments on the perimeter frames. Mohamed and Tarmoom [5] applied the AP method to assess the potential for progressive collapse of the case study described in this paper. When column D2 was notionally removed, the maximum vertical deformation under this column was 1342.6 mm, using nonlinear static analysis with the load combinations in Eq. 1 and Eq. 2. All of the designed w-shaped sections in Fig. 2 and Fig. 3 meet the requirements of section I2.2 of the AISC specifications [11] for prevention of local buckling. The requirements of section I2.3 [11] to control lateral torsional buckling are satisfied through the concrete-filled composite deck on top of the w-shaped beams.
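The concentrated hinge of Fig. 1 follows the generalized force-deformation relation of FEMA 356. A minimal sketch of such a backbone curve is given below; the 10% strain hardening and the parameter values a = 9θy, b = 11θy and c = 0.6 are assumed illustrative values, broadly corresponding to compact steel beam sections, and are not taken from the paper's (missing) table.

def hinge_backbone(theta, theta_y, a, b, c, hardening=1.1):
    """Normalized moment M/My for a FEMA 356-style concentrated plastic hinge.
    theta_y: yield rotation; a, b: plastic rotation limits; c: residual ratio."""
    if theta <= theta_y:
        return theta / theta_y                          # elastic branch A-B
    plastic = theta - theta_y
    if plastic <= a:
        return 1.0 + (hardening - 1.0) * plastic / a    # hardening branch B-C
    if plastic <= b:
        return c                                        # residual branch D-E
    return 0.0                                          # loss of capacity beyond E

# Illustrative evaluation across the backbone:
ty = 0.005
for th in (0.002, 0.005, 0.03, 0.06, 0.08):
    print(th, round(hinge_backbone(th, ty, 9 * ty, 11 * ty, 0.6), 3))

Assigning several such hinges along each member, as done in the analyses below, is a simple way of approximating distributed plasticity with concentrated-hinge software.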
In order to understand the effect of distributed plasticity on the deformation response under the notionally removed column, additional concentrated hinges are applied along the vertical and horizontal frame elements. The results are summarized in the following subsections.

Analysis results for notional removal of interior column D2 - three plastic hinges assigned at the beginning and at the end of each horizontal and vertical element
Mindful of the computational cost, only three plastic hinges are used at each end of each frame element to capture the effect of distributed plasticity on the deformation response. When column D2 is notionally removed, the unsupported span reaches 20 meters. Analysis of the structural system with column D2 removed produced the deformed shape shown in Fig. 4, and the maximum vertical deformation response reaches 1934.6 mm. It is noted that yielding was limited to the part of the floor adjacent to the notionally removed column D2 and did not spread to other areas of the floor.

Analysis results for notional removal of interior column D2 - five plastic hinges assigned at the beginning and at the end of each horizontal and vertical element
Mindful of the computational cost, the number of plastic hinges applied at each end of each frame element is increased to five to capture the effect of distributed plasticity on the deformation response. Analysis of the structural system with column D2 removed produced the deformed shape shown in Fig. 5, and the maximum vertical deformation response reaches 955 mm. Similar to the case of 3 plastic hinges, yielding remained within the beams surrounding the notionally removed column, and damage is essentially restricted to the floor area surrounding the notionally removed column. Fig. 5 shows additional plastic hinges yielding on all beams D1-D2 at each story level, compared with the case of fewer modelled plastic hinges shown in Fig. 4. Furthermore, additional hinges formed on beams D2-D3 at the upper four stories when 5 hinges are modelled, as shown in Fig. 5, compared with Fig. 4. Figure 6 shows the members assessed for stress level based on DCR, for the model where five plastic hinges were used on each frame element. Only the two members connected to the notionally removed column failed in each floor along grid line D, namely beams D1-D2 and D2-D3. This is because beams D1-D2 and D2-D3 became one 20-meter-long beam.

Summary
The use of moment frames along the perimeter and interior of a steel framed structure enhances the resistance to progressive collapse by limiting the response to the floor areas near the notionally removed columns. Moment resisting connections are more expensive to build than the simple connections typically used with gravity systems. When the notionally removed internal column is within a properly designed moment frame, ductile behaviour may be expected during a progressive collapse AP analysis, even with relatively long beam spans, similar to the ones examined in this study. Considering distributed plasticity using an acceptable method improves the accuracy of the responses obtained from AP progressive collapse analysis. It is conservative to ignore distributed plasticity in computing deformation responses. A simple approach to account for distributed plasticity, which involves modelling several concentrated hinges at suitable locations, may offer sufficient accuracy to predict the desired deformation-controlled behaviour.
Computationally expensive models for distributed plasticity are not always necessary, particularly when considering the high level of uncertainty involved in progressive collapse assessment methods. The findings of this study are applicable to steel structures where the lateral force resisting system consists of moment frames only, without a bracing system.

Figure 6. Beams D1-D2 and D2-D3 exceeded a DCR of 1.0 due to notional column removal when each frame element is modelled with five distributed plastic hinges.
Alterations in White Matter Evident Before the Onset of Psychosis

Background: Psychotic disorders are associated with widespread reductions in white matter (WM) integrity. However, the stage at which these abnormalities first appear, and whether they are correlates of psychotic illness as opposed to an increased vulnerability to psychosis, is unclear. We addressed these issues by using diffusion tensor imaging (DTI) to study subjects at ultra high risk (UHR) of psychosis before and after the onset of illness.
Methods: Thirty-two individuals at UHR for psychosis, 32 controls, and 15 patients with first-episode schizophrenia were studied using DTI. The UHR subjects and controls were re-scanned after 28 months. During this period, 8 UHR subjects had developed schizophrenia. Between-group differences in fractional anisotropy (FA) and diffusivity were evaluated cross sectionally and longitudinally using a nonparametric voxel-based analysis.
Results: At baseline, WM DTI properties were significantly different between the 3 groups (P < .001). Relative to controls, first-episode patients showed widespread reductions in FA and increases in diffusivity. DTI indices in the UHR group were intermediate relative to those in the other 2 groups. Longitudinal analysis revealed a significant group by time interaction in the left frontal WM (P < .001). In this region, there was a progressive reduction in FA in UHR subjects who developed psychosis that was not evident in UHR subjects who did not make a transition.
Conclusions: People at UHR for psychosis show alterations in WM qualitatively similar to, but less severe than, those in patients with schizophrenia. The onset of schizophrenia may be associated with a progressive reduction in the integrity of the frontal WM.

Introduction
Functional neuroimaging studies suggest that there are alterations in the functional connectivity between different brain regions in schizophrenia 1 and that these abnormalities are evident at the first episode of illness and in people at ultra high risk (UHR) of developing psychosis. 2,3 Similarly, structural neuroimaging studies have shown that there are global and regional changes in white matter (WM) volume in schizophrenia 4 and that these are also evident at the onset of psychosis 5 and in UHR subjects. 6 Approximately 20%-40% of people at UHR of psychosis develop psychosis within 24 months, 7,8 and this subgroup shows more marked reductions in gray matter volume in the inferior frontal 9,10 and medial temporal cortex 11 at presentation than UHR subjects who do not subsequently develop psychosis. Furthermore, repeated scanning of these subjects indicates that some of these baseline abnormalities progress with the later transition to psychosis 10,12 and that these longitudinal magnetic resonance imaging (MRI) changes are not evident in UHR subjects who do not become psychotic. However, the extent to which these gray matter changes are related to alterations in WM integrity is unclear. Diffusion tensor imaging (DTI) can measure the direction and degree of water diffusion as well as its anisotropy (ie, its relative value along ordered structures such as axon bundles vs across them). Anisotropy (commonly reported as fractional anisotropy [FA]) can be altered by pathologic factors, such as demyelination and axonal membrane deterioration, and so is often used as an index of WM integrity. 13 The diffusivity along axons provides complementary information regarding WM structure.
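For reference, FA is conventionally computed from the eigenvalues \(\lambda_1, \lambda_2, \lambda_3\) of the diffusion tensor; the standard definition, which the article does not spell out, is

\[
\mathrm{FA} = \sqrt{\frac{3}{2}}\,
\frac{\sqrt{(\lambda_1-\bar{\lambda})^2+(\lambda_2-\bar{\lambda})^2+(\lambda_3-\bar{\lambda})^2}}
{\sqrt{\lambda_1^2+\lambda_2^2+\lambda_3^2}},
\qquad \bar{\lambda}=\frac{\lambda_1+\lambda_2+\lambda_3}{3},
\]

so that FA ranges from 0 for isotropic diffusion to 1 when diffusion is confined to a single axis.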
The axial (parallel) diffusivity corresponds to the amount of diffusion measured along the direction of principal diffusion, while the radial (perpendicular) diffusivity corresponds to the average diffusion in the perpendicular plane. Axial and radial diffusivity have recently been reported to change following axonal degeneration and demyelination in animal models 14 ; however, there is no evidence as to whether this holds true in humans. Several DTI studies have reported reduced WM integrity in chronic schizophrenia, 15 particularly in tracts connecting the frontal and temporal lobes, 15,16 as well as in first-episode psychosis (FEP). [17][18][19] There have only been a few DTI studies in subjects at UHR of psychosis. These have been limited to cross sectional comparisons and have given inconsistent results. 20 One study reported a reduction of FA in the WM of the frontal lobe, 21 while another found a reduction in the superior longitudinal fasciculus (SLF). 22 Peters et al 23 used tractography to assess the uncinate and arcuate fasciculi, cingulum bundle, and corpus callosum but did not find any differences between UHR subjects and controls. Only one previous DTI study has subdivided UHR subjects in terms of their clinical outcome. UHR subjects who later became psychotic had lower FA at presentation than healthy controls in the WM of the left frontal lobe, in a region that includes the anterior thalamic radiation and the inferior fronto-occipital fasciculus. 24 Compared with UHR subjects who did not become psychotic, they had lower FA in the WM lateral to the right putamen and in the left superior temporal gyrus but higher FA in left posterior temporal WM. 24 To date, there have been no longitudinal DTI studies in UHR subjects. Hence, the extent to which abnormalities in WM integrity at first presentation may progress with the subsequent onset of psychosis is unknown.

The aim of the present study was to assess WM integrity in individuals at UHR for psychosis, using both cross sectional and longitudinal analyses. We tested 2 hypotheses. The first was that, relative to controls, UHR individuals would show qualitatively similar WM abnormalities to patients with FEP but that the magnitude of these abnormalities would be less severe. Our second prediction was that, within the UHR group, individuals who later developed psychosis would show more marked abnormalities at baseline than those who did not, and that these abnormalities would progress longitudinally as they made the transition to psychosis. To date, DTI studies in subjects at increased risk of psychosis have focused on measuring FA. 20 A subsidiary aim of the present study was to also assess WM integrity in UHR subjects using measures of radial and axial diffusivity. Our corresponding hypothesis was that reductions in FA in UHR subjects would be associated with altered diffusivity.

Participants
UHR Group. Individuals meeting the Personal Assessment and Crisis Evaluation criteria for the at-risk mental state (n = 32) were recruited from Outreach and Support in South London (OASIS). 7 The diagnosis was based on assessment by 2 experienced clinicians using the Comprehensive Assessment for the At-Risk Mental State (CAARMS). 25 All UHR subjects were naive to antipsychotic medication at the time of the baseline scan. They were followed clinically at monthly intervals during the first year, at 3-monthly intervals during the second and third years, and annually thereafter. Twenty-two UHR subjects completed both clinical follow-up and the MRI scans (see figure 1).
During this period, 5 developed psychosis and 17 did not. Transition to psychosis was defined according to the criteria in the CAARMS, and a diagnosis of schizophrenia was confirmed at reassessment 12 months after transition using the Structured Clinical Interview for Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) (SCID). 26 Within the transition subgroup (n = 5), the mean interval between the baseline and follow-up MRI scans was 2 years, while in the nontransition subgroup (n = 17), it was 2.4 years. At the time of the second MRI scan, 3 of the transition subgroup and 2 of the nontransition subgroup were receiving antipsychotic medication (quetiapine).

First-Episode Group. Patients from the same geographical area as the UHR sample who had recently presented with an FEP (n = 15) were recruited from the Lambeth Early Onset (LEO) Service. All met DSM-IV criteria for a schizophreniform psychosis 27 at the time of scanning and met SCID (DSM-IV) criteria for schizophrenia 26 when re-assessed 12 months later. Six of the patients were medication naive, and 9 had received less than 3 weeks of treatment with low doses of atypical antipsychotic medication (mean dose in chlorpromazine equivalents = 189 mg/d; mean duration of treatment = 11.6 d).

Controls. Healthy volunteers (n = 32) were recruited from the same geographical area as the clinical subjects, via local advertisements or from friends of the clinical subjects. Handedness was assessed using the Edinburgh Handedness Inventory. 28 Intellectual function (IQ) was estimated using the Wechsler Adult Intelligence Scale, 3rd Edition (WAIS-III). 29 The severity of symptoms in both clinical groups was assessed with the Positive and Negative Syndrome Scale 30 on the day of scanning by a psychiatrist (J.B.W.) trained in its use. Level of function was assessed using the Global Assessment of Functioning scale. 27 Exclusion criteria included a history of neurological disorder and a history of alcohol or other substance misuse disorder according to DSM-IV criteria. Participants gave written informed consent to participate in the study, which was reviewed and approved by the Joint South London and Maudsley and Institute of Psychiatry Research Ethics Committee.

MRI Acquisition
Magnetic resonance imaging was performed on all participants on a 1.5 Tesla GE SIGNA NVi scanner (General Electric, Milwaukee, WI). Following localizer, calibration, and structural scans, DTI data were collected using a cardiac-gated single-shot echo-planar sequence with 60 contiguous 2.5 mm thick near-axial slices, matrix size 96 × 96 over a 24 cm field of view, zero-filled during reconstruction to a 128 × 128 matrix, final reconstructed voxel size = 1.875 × 1.875 × 2.5 mm³; echo time = 107 ms; repetition time = 15 R-R intervals. At each scan location, 7 images without diffusion gradients were acquired together with 64 diffusion-weighted images (b = 1300 s/mm²) with diffusion sensitization directions distributed over a (hemi)sphere.

Image Analysis
Images were corrected for head movement and eddy currents, and skull-stripped, using FSL (http://www.fmrib.ox.ac.uk/fsl/). The 6 elements of the diffusion tensor were calculated for each voxel using linear regression. 13 FA was calculated, and the data from all subjects were realigned to the bicommissural line of a target image (chosen from the study participants) using an affine followed by a nonlinear registration.
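As an aside on the per-voxel tensor fit and scalar maps just mentioned, the following is a simplified sketch of the log-linear least-squares approach to the single-tensor signal model S = S0 exp(-b gᵀDg). The synthetic data and all variable names are illustrative assumptions; this is not the pipeline code used in the study.

import numpy as np

def fit_tensor(signals, s0, bvecs, bval):
    """Least-squares fit of the 6 unique diffusion tensor elements in one voxel.
    signals: DWI intensities per gradient direction; s0: b=0 intensity;
    bvecs: (N, 3) unit gradient directions; bval: b-value [s/mm^2]."""
    g = np.asarray(bvecs)
    # design matrix for [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]
    X = np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                         2*g[:, 0]*g[:, 1], 2*g[:, 0]*g[:, 2], 2*g[:, 1]*g[:, 2]])
    y = -np.log(np.asarray(signals) / s0) / bval
    d, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.array([[d[0], d[3], d[4]],
                     [d[3], d[1], d[5]],
                     [d[4], d[5], d[2]]])

def scalar_maps(D):
    """FA, axial and radial diffusivity from a single tensor."""
    lam = np.sort(np.linalg.eigvalsh(D))[::-1]           # lambda1 >= lambda2 >= lambda3
    md = lam.mean()
    fa = np.sqrt(1.5 * np.sum((lam - md)**2) / np.sum(lam**2))
    return fa, lam[0], (lam[1] + lam[2]) / 2.0           # FA, axial, radial

# Synthetic voxel: a tensor elongated along x, sampled with 64 random directions.
rng = np.random.default_rng(1)
g = rng.normal(size=(64, 3)); g /= np.linalg.norm(g, axis=1, keepdims=True)
D_true = np.diag([1.6e-3, 0.4e-3, 0.4e-3])               # mm^2/s
s = 1000.0 * np.exp(-1300.0 * np.einsum('ij,jk,ik->i', g, D_true, g))
fa, ax, rad = scalar_maps(fit_tensor(s, 1000.0, g, 1300.0))
print(f"FA = {fa:.2f}, axial = {ax:.2e} mm^2/s, radial = {rad:.2e} mm^2/s")

The b-value of 1300 s/mm² and the 64 directions mirror the acquisition described above; for this noise-free example the fit recovers the true tensor exactly.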
FA was calculated, and data from all subjects were realigned to the bicommissural line of a target image (chosen from the study participants) using an affine followed by a nonlinear registration. The nonlinear transformation was then applied to each tensor component, and the warped components were recombined into a single tensor file. In order to allow for the effects of this reorientation, the local tensor orientation was adjusted using the "preservation of principal direction" algorithm implemented within Camino (http://cmic.cs.ucl.ac.uk/camino). Finally, the realigned diffusion tensor images of all subjects were used to create a population-specific template to which each subject's images were then (re-)normalized; this final stage of processing was performed with the "dti-tk" toolkit (http://www.nitrc.org/projects/dtitk) and is described in more detail below.

Population-Specific Template Creation

A template was created from our study cohort (patients and controls) in order to avoid the bias associated with using a standard template from a normal population, which is not representative of our patient group. The population-specific template was generated from tensor images, rather than scalar images such as FA, using an unbiased diffeomorphic method. 31 Whole tensor-based registration was chosen over FA-based registration because this approach has been reported to align WM regions better than scalar-based registration methods. 31,32 Additionally, templates for FA, axial, and radial diffusivity were calculated from the final tensor template. 13

Normalization to the Population-Specific Template

Images were normalized to the population-specific tensor template using a high-dimensional approach. 31 FA, axial, and radial diffusivity maps were calculated. 13 Finally, the International Consortium for Brain Mapping white matter labels atlas (ICBM-DTI-81) 33 was registered to the 3 population-specific templates (FA, axial, and radial diffusivity) using an affine followed by a nonlinear registration and used as a brain mask to restrict subsequent analyses to WM only.

Group Mapping Analysis

The main analysis of the data was performed using XBAM v4, a software package developed at King's College London, Institute of Psychiatry (http://brainmap.co.uk), which implements a nonparametric approach based on permutations, to minimize assumptions of data normality, and on median statistics, to control for outlier effects. Relative to parametric analyses, the nonparametric approach has the additional advantage of allowing test statistics that incorporate spatial information, such as 3D cluster mass (the sum of suprathreshold voxel statistics), which are generally more powerful than other possible test statistics but for which no parametric approximation is known. For the cross sectional analysis, between-group differences in the DTI indices of interest (FA, parallel, and perpendicular diffusivity) were estimated by fitting an ANOVA model at each voxel, whereas for the longitudinal analysis, data were analyzed using a nonparametric repeated-measures ANOVA. To test for the interaction between group and time between scans, a series of nonparametric factorial analyses of variance were used. For each analysis, a voxel-level significance threshold within XBAM was initially set to .05 to give maximum sensitivity and to avoid type II errors, and voxels where nonparametric testing of the model gave evidence for rejection of the null hypothesis were highlighted. Three-dimensional spatial clusters generated from the voxels thus highlighted were then subject to more rigorous statistical testing of their cluster mass. At this stage, a cluster mass threshold was computed from the distribution of cluster masses in the permuted data, such that the expected number of type I error clusters under the null hypothesis was less than one over the whole brain. Finally, as multiple group-wise analyses were performed on the 3 diffusion indices, the results of the cluster analyses were corrected for multiple comparisons in order to further control for false positives.
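The permutation-based cluster-mass inference described above can be illustrated with a short sketch. Note that this is a simplified version built on t statistics; XBAM itself uses median-based statistics, and all names below are illustrative.

```python
import numpy as np
from scipy import ndimage, stats

def cluster_mass_test(data_a, data_b, n_perm=1000, voxel_p=0.05, seed=0):
    """Observed suprathreshold 3D cluster masses and the permutation null
    distribution of the maximum cluster mass.

    data_a, data_b : (n_subjects, X, Y, Z) arrays of a scalar map (e.g., FA).
    """
    rng = np.random.default_rng(seed)
    both = np.concatenate([data_a, data_b])
    n_a = len(data_a)
    t_crit = stats.t.ppf(1 - voxel_p / 2, df=len(both) - 2)

    def cluster_masses(a, b):
        t = np.nan_to_num(stats.ttest_ind(a, b, axis=0).statistic)
        labels, n = ndimage.label(np.abs(t) > t_crit)  # 3D connected components
        # Cluster mass = sum of suprathreshold |t| within each cluster.
        return ndimage.sum(np.abs(t), labels, index=range(1, n + 1))

    observed = cluster_masses(data_a, data_b)
    null_max = np.empty(n_perm)
    for i in range(n_perm):  # permute group labels to build the null
        perm = rng.permutation(len(both))
        null_max[i] = max(cluster_masses(both[perm[:n_a]], both[perm[n_a:]]),
                          default=0.0)
    return observed, null_max
```

A cluster-mass threshold is then taken from null_max so that the expected number of false-positive clusters over the whole brain is below one.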
Localization of Findings

Because the population-specific tensor template was not in Montreal Neurological Institute (MNI) space, significant clusters were warped into standard MNI space. Scalar image templates were coregistered to the IIT2 DTI brain template 34 using an initial affine registration followed by a nonlinear registration. The transformations so calculated were then applied to the group mapping results. The most likely anatomic localization of each highlighted cluster was finally determined by reference to the IIT2 DTI brain template in the spatial coordinates of the ICBM-152 brain template. 34

Statistical Analysis of Clinical and Demographic Data

Group-wise measures of clinical and sociodemographic variables were analyzed using one-way ANOVA, t test, and chi-square test where appropriate (PASW 18; SPSS Inc, Chicago, IL). When significant differences were found, Tukey's honestly significant difference test for pairwise comparisons was applied. Before statistical tests, the data were checked for the assumptions of normality and equality of variances. If these assumptions were violated, the Mann-Whitney U test was used.
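A minimal sketch of this decision logic for a two-group comparison, assuming SciPy in place of PASW/SPSS (function names are illustrative):

```python
from scipy import stats

def compare_groups(group_a, group_b, alpha=0.05):
    """Choose between a parametric and a nonparametric two-group comparison,
    following the assumption checks described above."""
    normal = all(stats.shapiro(g).pvalue > alpha for g in (group_a, group_b))
    equal_var = stats.levene(group_a, group_b).pvalue > alpha
    if normal and equal_var:
        return "t-test", stats.ttest_ind(group_a, group_b)
    return "mann-whitney", stats.mannwhitneyu(group_a, group_b)

# For >2 groups, stats.f_oneway(*groups) gives the one-way ANOVA; post hoc
# pairwise comparisons can then use, e.g.,
# statsmodels.stats.multicomp.pairwise_tukeyhsd(values, labels).
```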
Demographic and Clinical Variables

A summary of demographic and clinical variables is reported in table 1. Eight UHR subjects (25%) developed psychosis (UHR-P) subsequent to baseline (see figure 1). Five of this UHR-P subgroup were re-scanned after the onset of psychosis, along with 17 (of the 24) UHR subjects who had not developed psychosis. Eight (25%) of the 32 healthy controls were also re-scanned. The investigators tried to contact all of the subjects who had been scanned at baseline. However, some were not contactable, and others declined to be re-assessed. The mean interval between the baseline and follow-up scans in the total UHR sample was 28 months (range 32), while in the UHR-P and UHR-NP subgroups, it was 23.7 (range 15.8) and 29.3 (range 32) months, respectively. The mean interscan interval in controls was 45.7 months (range 28 months). The interscan interval did not differ between the UHR-P and UHR-NP groups but was significantly longer in controls than in UHR subjects (P < .001). At the time of the follow-up scan, 77% of the UHR sample were still antipsychotic naive (and were not taking any other psychotropic medication), while 23% of them were receiving antipsychotic treatment. Within the UHR-P and UHR-NP subgroups, the corresponding figures were 40% and 60%, and 88.2% and 11.8%, respectively. In the longitudinal analysis, because the interscan interval was longer in controls than in UHR subjects, the time between scans was used as a covariate; however, to allow for the potential confounding effects of correlations between the covariate and the independent variables that this introduces, we also repeated the longitudinal analysis without covariate adjustment.

Cross Sectional Comparisons at Baseline

FEP vs UHR vs Healthy Controls. At baseline, there were significant linear relationships across the 3 groups in FA in 2 clusters (for a detailed description of the results, see figure 2A and online supplementary table S1). In both clusters, FA was lowest in the FEP group, highest in controls, and intermediate in the UHR group (see figure 2B). The first cluster comprised voxels in areas corresponding to the splenium and body of the corpus callosum, the left inferior and superior longitudinal fasciculi, and the left inferior fronto-occipital fasciculus (for details, see online supplementary table S1). The second cluster included the right external capsule, the retrolenticular part of the right internal capsule, and the right posterior corona radiata (for details, see online supplementary table S1). There was a larger set of regions showing a significant linear relationship for radial diffusivity across the groups, with radial diffusivity greatest in the FEP group, lowest in controls, and intermediate in the UHR group (see figure 3). A linear relationship for axial diffusivity was evident in many clusters (see online supplementary figure S4). In some clusters, axial diffusivity was highest in the FEP group and lowest in the controls, but in others, the opposite relationship applied. Post hoc paired comparisons revealed that the differences in FA, radial, and axial diffusivities were driven by differences between the FEP and control groups, while the differences between the UHR group and controls were not significant. There were no FA or radial diffusivity clusters where the linear relationship across groups was in the opposite direction (eg, FA highest in FEP and radial diffusivity lowest in FEP).

Findings at Baseline in UHR Subjects Who Later Developed Psychosis

Within the UHR group, there were no differences in FA, axial, or radial diffusivity between subjects who developed psychosis (UHR-P) and those who did not (UHR-NP). Similarly, there were no significant differences between the UHR-P group and healthy controls.

Longitudinal Analysis

There was a significant group (UHR-P vs UHR-NP) by time (baseline vs follow-up scan) interaction on FA in a cluster spanning the anterior limb of the left internal capsule (ALIC), body of the corpus callosum, left superior corona radiata, and left superior fronto-occipital fasciculus (P = .0009). In this cluster, there was a longitudinal reduction in FA in the UHR-P group but a slight increase in the UHR-NP group (see figure 4), although these within-group changes were not themselves significant. This result did not change when the "nuisance covariate" (time between scans) was excluded from the analysis. There were no significant group-by-time interactions for axial or radial diffusivity.

Cross Sectional Comparisons at Follow-up

At follow-up, there were no significant differences in FA between the UHR subjects who had developed psychosis and the UHR subjects who had not. However, there was greater radial diffusivity in one cluster in the middle cerebellar peduncle (P < .001), while axial diffusivity was reduced in the middle cerebellar peduncle and left cerebral peduncle (P = .001) but greater in the right superior and posterior corona radiata, splenium, and body of the corpus callosum (P < .0004; for details, see online supplementary table S2).

Discussion

Our first hypothesis was that the UHR state would be associated with alterations in WM integrity qualitatively similar to, but less severe than, those seen in schizophrenia.
Consistent with this hypothesis, we found that, at baseline, FA, radial diffusivity, and axial diffusivity values in UHR subjects were intermediate relative to those in the first-episode patients and controls. These differences were evident in regions of WM corresponding to the major associative fibers that connect fronto-parieto-temporal (SLF) and fronto-parieto-occipital regions (inferior fronto-occipital fasciculus), commissural fibers (corpus callosum), and cortico-subcortical pathways (corona radiata, corticospinal tract, and corticopontine tract). The findings in the SLF corroborate previous findings in FEP 18,35 and UHR populations. 22 Our results suggest that structural abnormalities in fronto-temporo-parietal connections are present before the onset of psychosis, in line with functional connectivity findings in UHR subjects. 3 These abnormalities are comparable to those reported in DTI studies of patients with FEP. 18,19,35 We also found reduced FA in the splenium of the corpus callosum in the UHR group. A previous study in UHR subjects did not find differences in this region, 23 whereas reduced FA in the splenium of the callosum has been reported in some studies of FEP. 17,19

Our second hypothesis was that DTI abnormalities would be more marked in UHR subjects who later developed psychosis than in those who did not. Consistent with this prediction, we found a significant interaction between "group" (UHR-P and UHR-NP) and time, which post hoc analyses indicated was driven by a longitudinal reduction in FA in the UHR-P group (figure 4). This difference was evident in a region that comprised the left anterior limb of the internal capsule (ALIC), left corona radiata, left superior fronto-occipital fasciculus, and anterior body of the corpus callosum. To our knowledge, this is the first evidence that transition to psychosis in UHR subjects is associated with longitudinal changes in WM integrity. The ALIC is traversed by axonal fibers that connect the thalamus to the prefrontal cortex and form part of a circuit linking the frontal lobe and basal ganglia. WM abnormalities have previously been reported in the ALIC, 18 body of the corpus callosum, 18 and along the superior occipitofrontal fasciculus 19 in FEP, and oligodendrocyte abnormalities have been described in the prefrontal WM in schizophrenia. 36 The localization of our findings is also consistent with those in previous DTI studies of individuals at high genetic risk of schizophrenia. 37,38

We did not find significant differences at baseline between UHR subjects who subsequently developed psychosis and UHR subjects who did not. However, a recent DTI study 24 reported that, at presentation, the former subgroup had lower FA than the latter in the right putamen and left superior temporal lobe but higher FA in a posterior part of the left temporal lobe. This difference in findings may reflect differences in the nature of the respective UHR samples. Although the total number of UHR subjects was similar, a greater proportion developed psychosis in the previous study, which may have provided more power when comparing the UHR-P and UHR-NP subgroups. On the other hand, in the Bloemen study, 24 the proportion of UHR subjects who had already been treated with antipsychotics was significantly greater in the UHR-P than in the UHR-NP group, whereas at baseline, all our UHR subjects were medication naive. The potentially confounding effects of medication are discussed further below.
The UHR subjects in the previous study were also around 4 years younger than in our study, and the pattern of DTI findings in patients who develop psychosis appears to vary according to the age of the subject at illness onset. 39 A previous volumetric study of WM in UHR subgroups found that, at baseline, individuals who developed psychosis (UHR-P) significantly differed from subjects who did not (UHR-NP). 6 There are several possible explanations for this difference in findings. Methodological differences between our study and the one by Walterfang et al 6 may account for these inconsistencies. Another possible explanation is that the pathophysiological processes underlying WM volume changes in psychosis may not be reflected by changes in FA. 40 Our results suggest that further research is needed to understand how volumetric and diffusion changes relate to each other in UHR individuals.

We also predicted that alterations in FA in UHR subjects would be associated with changes in axial and radial diffusivity. This hypothesis was confirmed, although the relationship between alterations in FA and in diffusivity varied according to the site of the findings (see online supplementary table S3). Alterations in radial diffusivity have been associated with demyelination in animal models. 14 However, we were not able to directly assess myelination. While increased radial diffusivity may suggest axonal damage, 14 in some regions, this was associated with increased axial diffusivity, which is evident when fibers are tightly aligned within the WM. In other areas, radial diffusivity was increased despite no alteration in FA, while in the ALIC, there was reduced FA but no diffusivity changes. Further work is required to clarify the basis of these different relationships between FA and diffusivity.

We acknowledge some limitations to our study. Although larger than those in most previous DTI studies of UHR subjects, our sample was still modest, and the findings require confirmation in a larger group. Given the small number of females involved in the study, we could not assess the effects of gender on the DTI results. Because 3 of the UHR subjects who developed psychosis had been started on antipsychotic medication by the time of the follow-up scan, we cannot exclude the possibility that the longitudinal findings in this subgroup were influenced by treatment after the onset of psychosis. However, previous DTI studies in schizophrenia have not identified a clear effect of antipsychotic medication on FA. 15 Furthermore, a recent meta-analysis of antipsychotic-naive voxel-based morphometry (VBM) studies showed that the cortical volume loss observed at the onset of psychosis is independent of antipsychotic treatment. 41 Group-mapping analyses of FA maps have been criticized for inaccuracies in the alignment of individual images to the template. 42 We used a diffeomorphic deformable tensor-based image registration 31 because it registers DTI images better than other techniques, 32 reduces the susceptibility to false positives due to tensor shape confounds, and improves the sensitivity to detect anisotropy changes. 31 However, we acknowledge that while misregistration can be minimized by the use of voxel-based 31 or tract-based 42 analysis techniques, it can never be completely eliminated, and results should therefore always be interpreted in light of this potential confound.
Data were analyzed using a whole brain rather than a region of interest (ROI) approach because the existing DTI literature indicates that multiple tracts are affected in schizophrenia 15 and in UHR populations. 20

In conclusion, these data suggest that the UHR state is associated with reduced WM integrity in similar areas to those affected in FEP, but to a lesser degree. Furthermore, we have provided the first evidence that the onset of psychosis in UHR subjects may be associated with a longitudinal progression of abnormalities in the left frontal WM.

Funding

This work was supported by a Wellcome Trust Research Training Fellowship awarded to Dr F.C. (086636/Z/08/Z) and a Medical Research Council New Investigator Award conferred to Dr E.B. (G0901310). Dr S.B. was supported by a Joint MRC/Priory clinical research training fellowship (G0501775). This research was in part supported by GlaxoSmithKline, but with no restriction to data access, analysis, or presentation. Prof G.J.B. received honoraria for teaching from General Electric during the course of this study.
Case report of unusual synchronous anal and rectal squamous cell carcinoma: clinical and therapeutic lesson

Synchronous tumors of the rectum and anus are sporadic. Most cases in the literature are rectal adenocarcinomas with concomitant anal squamous cell carcinoma. To date, only two cases of concomitant squamous cell carcinomas of the rectum and anus have been reported, and both were treated with up-front surgery and received abdominoperineal resection with colostomy. Here, we report the first case in the literature of a patient with synchronous HPV-positive squamous cell carcinoma of the rectum and anus treated with definitive chemoradiotherapy with curative intent. The clinical-radiological evaluation demonstrated complete tumor regression. After 2 years of follow-up, no evidence of recurrence was observed.

KEYWORDS: SCC, rectal cancer, anal cancer, HPV, chemo-radiotherapy

Background

Squamous cell carcinoma (SCC) of the rectum is an infrequent malignancy. Only 0.1%-0.3% of rectal cancers (RCs) are represented by the SCC histotype, while adenocarcinoma represents about 90% of RCs (1,2). Similarly, anal cancer is rare, accounting for less than 1% of all new cancer diagnoses and less than 3% of all gastrointestinal tract tumors (3). Synchronous tumors of the rectum and anus are sporadic, and most cases in the literature are rectal adenocarcinomas with concomitant anal SCC. Identification of the optimal treatment in this unusual presentation is therefore challenging and has to be defined case by case. To date, only two cases of synchronous SCC of the rectum and anus have been reported, and both were treated with up-front surgery (4,5). Here, we report the first case in the literature of a patient with synchronous SCC of the rectum and anus treated with definitive chemo-radiotherapy (CRT) without subjecting her to radical surgery.

Clinical presentation

A 68-year-old woman came to our observation for the onset of pain in the anal area, weight loss, and nonspecific abdominal pain lasting about 2 months. The patient, a former smoker and nondrinker, presented in good condition, with a performance status (PS) of 0 according to ECOG and without comorbidities. Blood test values were within the normal range. About a month before, following the indication of her general practitioner, the patient had undergone an ultrasound examination of the abdomen, which was negative, and a colonoscopy. The endoscopic examination showed the presence of two lesions: a polyp of approximately 4 cm about 10 cm from the anal verge and another, ulcerated lesion at the anorectal junction. Biopsies of both lesions were performed. The histological examination, conducted in a local laboratory, revealed in both cases a nonkeratinizing SCC. No evidence of gynecological tumors was clinically observed.
Due to the unusual endoscopic presentation and histologic report, the case was discussed by a multidisciplinary team to define the best diagnostic and therapeutic flow. It was decided to repeat the endoscopic examination and revise the tumor samples. The pan-colonoscopy showed the presence, at the level of the anorectal junction, of an ulcerated lesion of approximately 15-20 mm and, about 8 cm from the anal verge, a lesion of about 3 cm with a nonlifting sign. A re-biopsy of each lesion was performed. At the microscopical examination, both rectal and anal biopsies confirmed the diagnosis, documenting infiltration by carcinoma with a solid growth pattern. Immunohistochemistry documented positivity for p40 and CK5/6 and negativity for CK20 and CDX2, consistent with a squamous, nonkeratinizing histotype according to the WHO 2019 edition of Digestive System Tumors. Moreover, diffuse p16 immunostaining was shown, as observed in human papillomavirus (HPV) infection (Figure 1).

Figure 1. Rectal mucosa (A, hematoxylin/eosin staining) and anal mucosa (B, hematoxylin/eosin staining) infiltrated by squamous nonkeratinizing carcinoma. Squamous differentiation underlined by p40 immunostaining (C, p40 immunohistochemistry) and negativity for CDX2 (D, CDX2 immunohistochemistry). Diffuse p16 immunostaining in the neoplasia (E, p16 immunohistochemistry).

A baseline total body CT scan with the administration of a contrast medium documented the presence of a polypoid lesion at the level of the rectum, at the right posterolateral wall, and, at the level of the anal region, pathological thickening. The MRI examination of the rectum showed the presence of a parietal formation with a polypoid appearance and a broad implant base at the level of the mid-rectum on the right posterolateral wall, measuring roughly 37 × 37 × 37 mm (AP × LL × CC) and causing narrowing of the lumen. This formation showed a restriction of the signal in the diffusion-weighted imaging (DWI)/ADC sequences and clear pathological enhancement in the post-contrast phases, infiltrating the mesorectum and extending about 4.5 mm beyond the mesorectal fascia, with extramural vascular invasion and involvement of the right levator ani muscle. At the level of the anorectal junction, the presence of heteroplastic tissue with dimensions of approximately 18 × 13 × 23 mm (AP × LL × CC) was highlighted, which infiltrated the internal and external anal sphincter, showing inhomogeneous intensity on T2 and signal restriction on DWI. Furthermore, two lymph node formations were found in the right posterolateral mesorectal fat, one 13 mm from the mesorectal fascia and another in the coccygeal area. Thus, both the endoscopic examination and MRI demonstrated no contiguity between the two lesions. Therefore, it was not possible to define whether the two lesions had a common origin or whether they were two distinct neoplasms. According to the MRI evaluation, the rectal cancer staging was T4b N1b CRM+ EMVI+, while the anal cancer staging was T2 N1a. The multidisciplinary group discussed the case again to define the therapeutic program. Considering that chemoradiotherapy is a standard of care (SOC) for SCC of the anus and that available evidence shows that rectal SCC is also sensitive to this treatment, it was decided to propose concurrent chemoradiotherapy, reserving the option of surgery in the presence of persistence or locoregional progression (1-3).
Thus, the patient started treatment with mitomycin C 10 mg/m² on days 1 and 29 plus capecitabine 825 mg/m² bis in die (bid) and concomitant radiotherapy on the pelvis and anorectum (total dose, 60 Gy). During therapy, the patient experienced grade 3 diarrhea and grade 2 anal mucositis, which required suspension of the concomitant therapy for 1 week, and symptomatic treatment for the diarrhea and anal mucositis was administered. After regression to grade 1 toxicity, the chemoradiotherapy was continued. Response to treatment was evaluated with clinical, endoscopic, and instrumental criteria 6 months after the beginning of chemoradiation. A digital rectal examination showed no evidence of disease, as did an endoscopic evaluation. Biopsies were taken during proctoscopy and were negative. Finally, re-evaluation with a contrast-enhanced MRI examination showed complete tumor regression with no sign of a viable tumor in the DWI sequence (Figure 2). After 2 years of follow-up performed according to the European Society for Medical Oncology (ESMO) guidelines for anal cancer, no evidence of disease was observed (3). The patient maintained good rectal function after CRT, without an impact on her daily life.

Discussion

The occurrence of SCC in the lower gastrointestinal tract is rare, with most of the tumors originating from the squamous epithelium of the anal canal (2). Primary SCCs of the colon and rectum are rare, representing less than 1% of colorectal malignancies (1-5). So far, while different theories have been proposed, the etiology of rectal SCC is still debated (1,2). It has been suggested that rectal SCC could originate from pluripotent stem cells that can differentiate into different lineages (6). Other groups have suggested a potential malignant evolution from persistent ectopic embryonal nests of ectodermal cells (7). The presence of chronic inflammation, such as inflammatory bowel disease (IBD), which causes a persistent irritative stimulus, could induce squamous metaplasia and favor the occurrence of rectal SCC (8). HPV and human immunodeficiency virus (HIV) are recognized risk factors for anal cancer (9). Nevertheless, the role of HPV infection in rectal SCC is controversial (10)(11)(12). Audeau and colleagues evaluated the association with HPV in a cohort of 20 patients with SCC of the rectum, adenosquamous tumors, or adenocarcinoma with squamous dysplasia; the authors reported no correlation with HPV 6, 11, 16, and 18 (10). On the contrary, other case reports or case series found a correlation with HPV positivity in squamous rectal cancer (11,12). Guerra and colleagues conducted a retrospective analysis of the Surveillance, Epidemiology, and End Results (SEER) database to investigate the clinical-pathological characteristics of rectal SCC (13). In a population of 142 patients diagnosed between 1946 and 2015, the median age was 63 years; women predominated, as did diagnosis at an early stage rather than with advanced disease.

The presence of synchronous rectal and anal SCC is an uncommon condition, and to date, only two cases have been described in the literature (4,5). The first was a 48-year-old man with an anal and a rectosigmoid SCC, with type 2 diabetes as the only comorbidity; no history of smoking or alcohol consumption was described (4). HPV was not tested. The patient underwent abdominoperineal resection, and a permanent colostomy was positioned. Subsequent adjuvant chemoradiotherapy was performed.
The second case was a 78-year-old man with a concurrent anal canal SCC and a rectal SCC (5). The patient had a history of heavy smoking, alcohol drinking, and opium consumption. No comorbidity or viral infection was reported. While the endoscopic and radiological evaluation demonstrated the presence of rectal and anal lesions, the biopsy results were negative. Thus, the patient underwent up-front diagnostic and therapeutic surgery with an abdominoperineal resection. Histopathology proved the presence of synchronous rectal and anal SCC. After a multidisciplinary discussion, postoperative chemoradiotherapy was proposed.

In this scenario, our case could be of interest in several respects. It represents the first case of concomitant HPV-positive rectal and anal SCC described in the literature. It is very difficult to assess whether the rectal SCC was a metastasis of the anal cancer or a second malignancy. Repeated endoscopic evaluation, CT scan, and high-quality MRI did not demonstrate a clear contiguity between the two lesions. Moreover, from a clinical point of view, it is difficult to reconcile a small anal cancer with a significantly more advanced rectal SCC. However, in both lesions, the presence of an HPV infection could clearly have contributed to the pathogenesis. Unfortunately, as in the other two cases, due to the lack of an adequate tumor sample, it was not possible to perform a genetic evaluation to discriminate whether the two lesions had a common or distinct origin or to evaluate HPV genetic typing. This aspect represents a limitation that deserves to be investigated by further translational prospective studies and case series.

In recent decades, definitive CRT has emerged as the SOC for early and locally advanced anal SCC with curative intent (3). In cases of persistent disease or locoregional recurrent disease, surgery can represent a therapeutic option. Due to its infrequent occurrence, there has been no prospective study investigating the optimal treatment for rectal SCC (1). Historically, up-front surgery was proposed; however, it was complicated by significant morbidity and mortality (1,14). Therefore, definitive CRT has been proposed to improve outcomes and preserve organs (1,15,16). A French retrospective study included 23 patients treated in two referral institutions. CRT achieved a high clinical complete response rate of 83%. The 5-year disease-free survival rate was 81%, while the 5-year overall survival rate was 86%. Remarkably, the 5-year colostomy-free survival rate was 65%. In another series of nine patients with locally advanced or metastatic rectal SCC, induction with docetaxel, 5-fluorouracil, and cisplatin (DCF) produced a promising response rate that was further increased after chemoradiation (16). For patients with metastatic disease, no evidence based on prospective studies is currently available. A case series from the Mayo Clinic included 52 patients with advanced rectal SCC; however, the exact number of cases with metastatic disease was not indicated (17). Based on these findings, it is reasonable to treat rectal SCC similarly to anal SCC, with CRT in cases of locally advanced disease and platinum-based chemotherapy in combination with 5-fluorouracil or a taxane for metastatic disease (1). Intriguingly, our case report is the first to use a conservative approach for synchronous rectal and anal lesions.
The patient received the combination of mitomycin C 10 mg/m² on days 1 and 29 together with capecitabine 825 mg/m² (bid) with concurrent radiotherapy for a total dose of 60 Gy with curative intent. According to the ESMO guidelines, while the optimal dose for curative CRT is not known, for patients with locally advanced anal cancer, the radiotherapy dose should be >50.4 Gy (3). In the absence of prospective studies, we can consider these recommendations valid for rectal SCC as well. The best timing for tumor response assessment in rectal SCC has not yet been defined. In the ACT II study, it was shown that a significant proportion of patients with anal SCC treated with CRT do not exhibit a complete response when assessed at 10-12 weeks and can display complete tumor regression at 26 weeks from the beginning of CRT (18). Clinical and radiological evaluation 6 months after the beginning of chemoradiation showed no evidence of disease and a complete clinical response. After a longer follow-up of 2 years, no evidence of recurrence was observed, without residual toxicity.

Conclusion

Rectal SCC is an uncommon malignancy with limited evidence to guide treatment decisions. In this scenario, we report the first case of synchronous HPV-positive rectal and anal SCC treated with conservative CRT. While more cases are needed to better understand the biology and the multidisciplinary approach, we think that our case report could be of interest in this orphan disease.

Patient perspective

When I started my oncological journey, I was full of fears. I met doctors who helped and supported me through the hardest of times. Thanks to teamwork, more than 2 years after the diagnosis, I recovered and went back to living normally.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author.

Ethics statement

Written informed consent was obtained from the individual(s), and minor(s)' legal guardian/next of kin, for the publication of any potentially identifiable images or data included in this article.

Author contributions

Conceptualization: DC. Original writing: DC, ST, and PP. Data collection: all the authors. Supervision: RG, SC, FC, AR, and EM. All authors contributed to the article and approved the submitted version.
A novel anti-human IL-1R7 antibody reduces IL-18-mediated inflammatory signaling

Unchecked inflammation can result in severe diseases with high mortality, such as macrophage activation syndrome (MAS). MAS and associated cytokine storms have been observed in COVID-19 patients exhibiting systemic hyperinflammation. Interleukin-18 (IL-18), a proinflammatory cytokine belonging to the IL-1 family, is elevated in both MAS and COVID-19 patients, and its level is known to correlate with the severity of COVID-19 symptoms. IL-18 binds its specific receptor IL-1 receptor 5 (IL-1R5, also known as IL-18 receptor alpha chain), leading to the recruitment of the coreceptor, IL-1 receptor 7 (IL-1R7, also known as IL-18 receptor beta chain). This heterotrimeric complex then initiates downstream signaling, resulting in systemic and local inflammation. Here, we developed a novel humanized monoclonal anti-IL-1R7 antibody to specifically block the activity of IL-18 and its inflammatory signaling. We characterized the function of this antibody in human cell lines, in freshly obtained peripheral blood mononuclear cells (PBMCs), and in human whole blood cultures. We found that the anti-IL-1R7 antibody significantly suppressed IL-18-mediated NFκB activation, reduced IL-18-stimulated IFNγ and IL-6 production in human cell lines, and reduced IL-18-induced IFNγ, IL-6, and TNFα production in PBMCs. Moreover, the anti-IL-1R7 antibody significantly inhibited LPS- and Candida albicans–induced IFNγ production in PBMCs, as well as LPS-induced IFNγ production in whole blood cultures. Our data suggest that blocking IL-1R7 could represent a potential therapeutic strategy to specifically modulate IL-18 signaling and may warrant further investigation into its clinical potential for treating IL-18-mediated diseases, including MAS and COVID-19.

Initially identified as an IFNγ-inducing factor, interleukin-18 (IL-18) is a member of the IL-1 family of cytokines (1)(2)(3). Similar to IL-1β, IL-18 is synthesized as an inactive precursor requiring processing by caspase-1 into an active (mature) cytokine (4). IL-18 forms a signaling complex by binding to IL-1 receptor 5 (IL-1R5, also known as the IL-18 receptor alpha chain), which is the ligand-binding chain for mature IL-18; however, this binding is of low affinity. In cells that express the coreceptor, termed IL-1 receptor 7 (IL-1R7, also known as the IL-18 receptor beta chain), a high-affinity complex is formed. With the juxtaposition of Toll-IL-1 receptor (TIR) domains in the cytosolic segment of the IL-18 receptor complex, downstream inflammatory signaling is initiated, including the sequential recruitment and activation of MyD88, IRAKs, TRAF6, and NFκB (1). IL-18 is upregulated in many diseases, including inflammatory bowel diseases (IBD), macrophage activation syndrome (MAS), and COVID-19 (1, ...). Thus, there is considerable interest in developing IL-18 inhibitors to treat these diseases. The activity of IL-18 is kept low by its natural inhibitor, the IL-18-binding protein (IL-18BP), which provides a competing high-affinity binding site for IL-18 (31). Clinical studies reveal that blocking IL-18 with IL-18BP reduces the severe life-threatening colitis in children with the NLRC4 mutation (32). In addition, blocking IL-12, IL-18, and IFNγ has been shown to reduce the severity of experimental IBD in mice (33)(34)(35).
Importantly, neutralization of IL-18 with anti-IL-18 antibodies or IL-18BP is effective in both dextran sodium sulfate (DSS)- and trinitrobenzene sulfonic acid (TNBS)-induced models of IBD and reduces intestinal IFNγ and TNFα, demonstrating IL-18 as a pivotal mediator in experimental colitis (34,36,37).

MAS, which is also known as secondary hemophagocytic lymphohistiocytosis (sHLH), is characterized by a severe hyperinflammatory state with pancytopenia, liver dysfunction, increased D-dimer and ferritin, and coagulopathy (26). A severe IL-18/IL-18BP imbalance was found in MAS patients, in whom the plasma concentrations of IL-18 were 20-30-fold higher than in patients with rheumatoid arthritis (38)(39)(40)(41)(42). In addition, MAS is observed in COVID-19 patients with severe disease (28,43). The serum levels of IL-18 were significantly higher in COVID-19 patients with MAS compared with COVID-19 patients without MAS (28) and were associated with disease severity and poor clinical outcome in COVID-19 patients (29,30). Patients with a gain-of-function mutation in NLRC4 (32) or a deficiency in X-linked inhibitor of apoptosis (XIAP) (19) experience a life-threatening hyperinflammatory state with high levels of free IL-18 that is similar to MAS; treatment of these patients with IL-18BP alleviates the inflammatory state (26). In addition, markedly elevated plasma IL-18 levels are present in patients with systemic juvenile idiopathic arthritis (sJIA) or systemic inflammatory adult-onset Still's disease (AOSD), who are at high risk of developing life-threatening MAS (22,39,42,44,45). Treatment with anakinra, a recombinant form of the natural IL-1 receptor antagonist, is effective for patients with sJIA or AOSD who develop MAS (26,46,47). The mechanism here includes a reduction in the processing of the inactive IL-18 precursor into an active cytokine (48). Moreover, IL-18BP has also been used effectively to treat patients with refractory AOSD and sJIA and demonstrated early signs of clinical and laboratory marker efficacy (49,50). Together, these findings suggest that IL-18 neutralization can contribute to the resolution of the hyperinflammatory state.

Although IL-18 is a validated therapeutic target for treating IBD and MAS, IL-1R5 also serves as a receptor for the anti-inflammatory cytokine IL-37 (51,52). Therefore, antibodies against IL-1R5 would concurrently block endogenous IL-37 and its anti-inflammatory functions. In addition, because of the high affinity of IL-18BP for IL-18, IL-18BP also binds IL-37 (53,54). Thus, the use of IL-18BP to reduce the activity of IL-18 has the disadvantage of binding to IL-37 and reducing the function of IL-37 in disease. In fact, several studies have reported inflammatory diseases associated with low IL-37 (55)(56)(57). The anti-inflammatory properties of IL-18BP are lost at high doses (58). Indeed, there are data revealing that blocking IL-1R5 with antibodies or using IL-18BP exacerbates inflammation (59,60). Different from other promiscuous accessory proteins in the IL-1 receptor family, such as IL-1R3 (61), IL-1R7 is the sole accessory chain for IL-1R5 and IL-18 signaling (62). IL-1R7 is essential for the recruitment and activation of IRAK, which is required for IL-18-induced signaling and function (63)(64)(65)(66). Most importantly, targeting IL-1R7 allows IL-18 to be blocked specifically without affecting endogenous IL-37 signaling, and this is the rationale for the development of anti-IL-1R7.
The anti-IL-1R7 antibody used in the present study bound specifically to human IL-1R7 and contained the Fc-LALA mutation to prevent the triggering of FcγRs (61,67). Using this novel antibody, we carried out in vitro cultures to assess the effectiveness of anti-IL-1R7 in inhibiting IL-18 activities in both human cell lines and primary cells. We found that the anti-IL-1R7 antibody specifically suppresses IL-18-mediated proinflammatory signaling and subsequent cytokine production. Data from these studies suggest that blocking IL-1R7 could be a potential therapeutic strategy to specifically modulate IL-18 signaling and IL-18-related inflammatory diseases, including MAS and possibly MAS-like clinical manifestations of COVID-19.

Results

The binding specificity of anti-IL-1R7 antibodies to human IL-1R7

We selected two anti-IL-1R7 antibodies to determine binding to human cell lines. These antibodies (MAB300 and MAB304) were humanized IgG1 and expressed with the LALA sequence. They were developed to target human IL-1R7 (hIL-1R7) and thus to inhibit assembly of the IL-18/IL-1R5/IL-1R7 ternary complex and the subsequent proinflammatory signaling of IL-18. The binding capacities of the antibodies were first tested by titration against either immobilized recombinant human IL-1R7 or recombinant rhesus monkey IL-1R7 (rhIL-1R7). As shown in Figure 1A, the two anti-IL-1R7 antibodies MAB300 (left) and MAB304 (right) both bind immobilized recombinant human or rhesus IL-1R7 protein dose-dependently, with maximum binding reached at concentrations between 1 and 10 μg/ml. The EC50 values of the fitted binding curves are shown in Table 1. MAB300 binds to human and rhesus monkey IL-1R7 with similar EC50 values of 18.4 and 19.7 ng/ml, respectively, whereas MAB304 binds to human and rhesus monkey IL-1R7 with EC50 values of 14.2 and 13.6 ng/ml, respectively. Next, we further analyzed the binding of the antibodies to cells ectopically expressing human or mouse IL-1R7. As presented in Figure 1B, similar to the recombinant protein binding in Figure 1A, both MAB300 and MAB304 bind efficiently to HEK-293-FreeStyle cells transiently expressing full-length human-IL-1R7 encoding DNA. MAB300 binds to hIL-1R7-expressing cells in a dose-dependent manner with an EC50 of 64.6 ng/ml (see Table 1), while MAB304 binds with an EC50 of 39.6 ng/ml. Importantly, neither antibody binds to mouse IL-1R7 (mIL-1R7) expressed on HEK-293-FreeStyle cells (Fig. 1B and Table 1).

Effects of the anti-IL-1R7 antibodies on IL-18-mediated proinflammatory signaling and cytokine production in human cell lines

We carried out experiments using in vitro cell model systems to characterize the activity of the anti-hIL-1R7 antibodies in blocking IL-18-mediated proinflammatory signaling and cytokine production. First, HEK-Blue-IL-18 cells stably transfected with an NFκB-driven reporter gene construct were used to assess blockade of IL-18-induced proinflammatory signaling. Figure 2A shows the inhibition of IL-18-induced proinflammatory signaling by anti-IL-1R7 and by the reference anti-IL-1R7 monoclonal antibody (mAb) MAB1181 in HEK-Blue IL-18 cells. Whereas the reference antibody MAB1181 reduces the production and secretion of the reporter to a limited extent, MAB300 and MAB304 significantly block IL-18-mediated signaling in this cell line, with EC50 values of 2851 and 3750 ng/ml, respectively (Fig. 2A and Table 2, upper).
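The EC50 values quoted here and in Tables 1 and 2 come from fitted concentration-response curves (GraphPad Prism 8, per the Experimental procedures below). A minimal sketch of an equivalent four-parameter logistic fit, with illustrative function names:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """Four-parameter logistic concentration-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

def fit_ec50(conc, response):
    """Fit the 4PL model and return the EC50 in the units of `conc`."""
    p0 = [min(response), max(response), np.median(conc), 1.0]
    params, _ = curve_fit(four_pl, conc, response, p0=p0, maxfev=10000)
    return params[2]  # ec50
```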
We also used human lung epithelial A549 cells stably transfected with the human IL-1R7 encoding gene alone (A549-hIL-1R7) or with both the human IL-1R7 and IL-1R9 genes (A549-hIL-1R7/9), as well as human KG-1 cells, to test the effect of anti-IL-1R7 on the inhibition of IL-18-induced release of IL-6 and IFNγ. In the A549-hIL-1R7/9 cells, anti-IL-1R7 MAB300 and MAB304 significantly block IL-18-induced release of IL-6, with EC50 values of 336 and 994 ng/ml, respectively (see Fig. 2B and Table 2, middle). Again, the extent of the inhibition and the potency of anti-IL-1R7 are significantly higher compared with the reference antibody MAB1181. Similarly, anti-IL-1R7 also potently inhibits IL-18-induced IFNγ release in human KG-1 cells (Fig. 2C). The EC50 value for this inhibition is 40.3 ng/ml for MAB300 and 804 ng/ml for MAB304, whereas the reference antibody MAB1181 inhibits only at very high concentrations (Fig. 2C and Table 2, bottom). Overall, the results demonstrate that the newly developed anti-IL-1R7 antibodies MAB300 and MAB304 provide robust inhibition of IL-18-induced signaling and proinflammatory cell activation in different in vitro cell systems.

In the systems used above, anti-IL-1R7 MAB300 showed the best potency compared with both MAB304 and the reference antibody, and this difference was most prominent in the KG-1 IFNγ release assay. Therefore, in the subsequent cell cultures, only MAB300 was further tested as the anti-IL-1R7 antibody, in comparison to a human IgG1 isotype control antibody. First, we compared side by side the efficiency of anti-IL-1R7 MAB300 and the isotype control antibody on IL-18-induced IL-6 in A549-hIL-1R7 cells (68). As shown in Figure 2D, anti-IL-1R7 robustly inhibits IL-18-induced IL-6 (70% reduction) in the cell culture at 1, 5, and 10 μg/ml, with similar potency to the natural IL-18 inhibitor IL-18BP. In contrast, the isotype control has no effect on IL-18-induced IL-6. We also tested the effect of anti-IL-1R7 on IL-1β-induced IL-6 and IL-1α. A moderate inhibitory effect of anti-IL-1R7 was observed on IL-1β-induced IL-6 (around 10% inhibition at 1 μg/ml and 30% at 5 μg/ml) in the same cells, and no effect was observed on IL-1β-induced IL-1α (Fig. S1). Thus, we continued with our evaluation of MAB300 (hereafter indicated as anti-IL-1R7) in primary human cell cultures.

Table 1. EC50 values of the fitted binding curves of the anti-IL-1R7 antibodies to immobilized recombinant human or rhesus IL-1R7 protein (upper) and to HEK-293-FreeStyle cells transiently expressing full-length human- or mouse-IL-1R7 encoding DNAs (bottom), as shown in Figure 1, A and B.

Effects of anti-IL-1R7 antibody on LPS-induced cytokine production in human PBMC and whole blood cultures

As an IFNγ-inducing factor, IL-18 is constitutively expressed in fresh human PBMCs and whole blood (71). It is required for, and also facilitates, LPS-induced IFNγ in PBMCs and whole blood (71,72). Thus, we further measured the effects of anti-IL-1R7 on the production of LPS-induced IFNγ and other cytokines in both PBMC and whole blood cultures. As shown in Figure 4A, anti-IL-1R7 specifically inhibited 24-h LPS-induced IFNγ in PBMC cultures, with nearly 85% inhibition at 10 μg/ml of the antibody. This reduction is comparable to, or even greater than, the reduction by IL-18BP. There was no significant reduction of LPS-induced TNFα, IL-6, or IL-1β using either anti-IL-1R7 or IL-18BP (Fig. 4, B-D). In contrast, while not affecting IL-12/18-induced cytokines, IL-1Ra reduced LPS-induced IFNγ, TNFα, and IL-1β, consistent with an important role of IL-1 in LPS-induced inflammatory signaling (73). Similar results were observed for 3-day LPS-induced cytokines (Fig. S3). In parallel, we also assessed the function of anti-IL-1R7 on LPS-induced cytokines in whole blood cultures. In line with the effects observed in PBMC cultures, anti-IL-1R7 inhibited LPS-induced IFNγ (73%) in whole blood cultures. In these same cultures, we found no reduction in LPS-induced TNFα or IL-6 (Fig. 5). IL-18BP and IL-1Ra suppressed LPS-induced IFNγ, TNFα, or IL-6, similar to the observations in PBMC cultures.
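The percent-inhibition figures quoted in this section are simple ratios of stimulus-induced cytokine levels. A small sketch, with hypothetical example numbers chosen only to reproduce the ~85% figure:

```python
def percent_inhibition(treated, stimulated, unstimulated=0.0):
    """Percent inhibition of stimulus-induced cytokine release (e.g., pg/ml).

    treated      : level with stimulus plus antibody
    stimulated   : level with stimulus alone
    unstimulated : background level without stimulus
    """
    return 100.0 * (1.0 - (treated - unstimulated) / (stimulated - unstimulated))

# Hypothetical example: LPS-induced IFNgamma of 2000 pg/ml over a 50 pg/ml
# background, reduced to ~340 pg/ml by the antibody -> ~85% inhibition.
print(round(percent_inhibition(342.5, 2000.0, 50.0)))  # 85
```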
Effects of anti-IL-1R7 antibody on Candida-induced cytokine production in PBMC cultures

Candida has been shown to markedly induce Th1 lymphocyte activation and IFNγ production in PBMCs after 48 h (74), and IL-18 mediates the Candida-induced IFNγ production (75). We thus assessed the effect of anti-IL-1R7 on Candida-induced cytokine production. As presented in Figure S4, anti-IL-1R7 significantly suppressed Candida-induced IFNγ production, while other cytokines were not significantly affected.

Table 2. EC50 values of the fitted curves for the inhibition by anti-IL-1R7 of IL-18-mediated proinflammatory signaling and cytokine production in human HEK-Blue-IL-18 cells (upper), A549 cells stably transfected with the human IL-1R7/9 genes (middle), and KG-1 cells (bottom), as presented in Figure 2, A-C.

Discussion

In summary, our data have confirmed the binding affinity and specificity of the novel LALA-mutated anti-human IL-1R7 (MAB300) to both recombinant and cell-surface-expressed human IL-1R7 and demonstrated the efficacy of the antibody in inhibiting IL-18-mediated inflammatory signaling, responses, and cytokine production. We observed similar trends of inhibitory effects between the newly developed anti-IL-1R7 and IL-18BP on IL-18-mediated inflammatory responses and cytokine production. However, different from IL-18BP, our antibody selectively binds human IL-1R7 with a high affinity in the nanomolar range and prevents IL-18 signaling without affecting the anti-inflammatory signaling of IL-37 (Figure 6 and Fig. S5) (26). The antibody does not interfere with IL-1R5, which is needed for binding IL-37 (52). Nor does it interfere with IL-18BP binding to IL-37 (53). In fact, any reduction in IL-37 levels due to binding to IL-18BP can result in greater inflammation. Thus, the specificity of anti-IL-1R7 for IL-18 blockade is the rationale for using it to prevent IL-18 activity.

In any IL-18-related pathological condition, the outcome of blocking IL-18 correlates with the concentration of free, active IL-18, the surface level of IL-1R5, the presence of IL-1R7, and the level of IL-18BP (1). In health, the naturally occurring IL-18BP binds IL-18 with high affinity (0.5 nM), and markedly low concentrations of free IL-18 are available, if any, to trigger IL-1R5. IL-1R5 is thus available to bind the anti-inflammatory cytokine IL-37. However, in diseases with a hyperinflammatory status, such as MAS, large amounts of free IL-18 are produced to bind IL-1R5, and less IL-1R5 becomes available for IL-37 to function as an anti-inflammatory cytokine. On the other hand, if the concentration of IL-18BP increases and exceeds the need to bind IL-18, IL-37 can bind to the excess IL-18BP and is not available for promoting its anti-inflammatory portfolio (52,53,76).
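To make the balance just described concrete, the free IL-18 concentration implied by simple 1:1 binding to IL-18BP (Kd ≈ 0.5 nM, as cited above) can be computed by solving the usual quadratic. The totals below are purely illustrative, and the model ignores IL-37 competition and any stoichiometric complexity:

```python
import math

def free_il18(total_il18, total_il18bp, kd=0.5):
    """Free IL-18 (nM) at equilibrium for simple 1:1 binding to IL-18BP,
    from L^2 + L*(Kd + B_t - L_t) - L_t*Kd = 0."""
    b = kd + total_il18bp - total_il18
    return (-b + math.sqrt(b * b + 4.0 * total_il18 * kd)) / 2.0

# Illustrative totals only: with IL-18BP in excess, nearly all IL-18 is
# sequestered; a MAS-like surge of IL-18 overwhelms IL-18BP and leaves
# most of it free to engage IL-1R5/IL-1R7.
print(free_il18(1.0, 3.0))   # ~0.19 nM free
print(free_il18(20.0, 3.0))  # ~17 nM free
```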
This concept fits well with a recent finding from a Dutch study in which 300 patients at high risk for a cardiovascular event had high levels of IL-18BP (77). In that study, biomarkers of risk such as CRP correlated with the level of IL-18BP. Therefore, considering that IL-18BP or anti-IL-1R5 antibodies could interfere with the anti-inflammatory activity of IL-37 in humans (51-54), the clinical application of anti-IL-1R7 would be more precise in treating IL-18-mediated diseases.

In contrast to MAB1181, a currently commercially available reference monoclonal antibody against IL-1R7, our novel anti-IL-1R7 (MAB300) shows a twofold greater ability to suppress IL-18-activated NFκB signaling and IL-6 or IFNγ production, and a higher efficiency than another candidate antibody we developed (MAB304) (Fig. 2). In the experiment of Figure 2C, we unexpectedly observed a lower basal level of secreted IFNγ for MAB304. This may be related to some variation in the number of cells seeded on the culture plate. In addition, it should be noted that the EC50 values for inhibition of IL-18-induced cell activation differ significantly between the different cell systems tested (Table 2). This may be explained by the artificial gene reporter setup used in HEK-Blue IL-18 cells to measure IL-18 blockade and by the expression levels of the transfected IL-18 receptors in the A549-IL-1R7/9 cells. In our PBMC cultures, the reference antibody MAB1181 did not significantly inhibit IL-12/IL-18-induced IFNγ production in primary PBMCs (Fig. S2). Targeting different protein fragments of IL-1R7 as immunogens may markedly affect the bioactivities of the resulting antibodies in vivo. Recently, Liu et al reported the development of a synthetic human antibody via a phage-display system that can antagonize IL-1R7 and its signaling through an allosteric mechanism (78). We compared our data from a similar KG-1 assay for IL-18-induced IFNγ release and observed an IC50 of 40 ng/ml with our lead candidate (MAB300) (Fig. 2C). This corresponds to an IC50 of 0.26 nM (assuming a molecular weight of approximately 150 kDa for IgG) and is more potent than the IgG 3131 reported by Liu et al., with an IC50 of 3 nM (78). Moreover, our antibodies have incorporated an Fc-LALA (L234A/L235A) substitution, which has the advantage of preventing the triggering of FcγRs (61,67). In a previous study using an Fc-mediated gene reporter assay (61), an antibody with the LALA mutation completely abrogated Fc-mediated effector cell functions without cytotoxic potential.

In our human A549 cell study, it is noteworthy that we tested the anti-IL-1R7 in two different but similar A549 cell lines. In the A549 cell line expressing both IL-1R7 and IL-1R9, MAB1181 was used as the reference monoclonal anti-IL-1R7 antibody against which to compare the inhibitory effects of the anti-IL-1R7 antibodies MAB300 and MAB304 on IL-18-induced IL-6 (Fig. 2B). In the A549-IL-1R7 cell line, where only IL-1R7 was stably overexpressed, an anti-Digoxigenin antibody was used as a nonbinding isotype control in parallel to compare the effect of anti-IL-1R7 (MAB300) on IL-18-induced IL-6 (Fig. 2D). In both experiments, MAB300 suppressed IL-18-induced IL-6 with similar reductions: 66% inhibition at 1 μg/ml and 70% inhibition at 10 μg/ml. The results from the two A549 cell line cultures confirmed the dependency of IL-18 signaling on IL-1R7. Not surprisingly, the expression of IL-1R9 had no effect on the activity of IL-18. In contrast to the marked inhibition of IL-18-induced IL-6 production from A549-IL-1R7 cells by anti-IL-1R7, we detected only a moderate reduction of IL-1β-induced IL-6.
No effect was observed on IL-1β-induced IL-1α production (Fig. S1). As there are no T cells or NK cells in the A549 cell cultures, there is likely no role of IFNγ in the IL-1β signaling, as can take place in PBMC cultures (79). The minor inhibition of IL-1β-induced IL-6 that we observed in the A549 cells may be explained by an effect of the anti-IL-1R7 on NF-κB signaling in the cells via the stably overexpressed IL-1R7 (68).

We further characterized the function of the anti-IL-1R7 antibody in primary human PBMC and whole blood cultures. First, we assessed the effect of the antibody on IL-12/IL-18-induced IFNγ secretion in PBMCs. This is a direct IL-18-stimulated inflammatory response, and the suppressive effect of anti-IL-1R7 is straightforward and dose-dependent. Interestingly, besides IFNγ, both anti-IL-1R7 and IL-18BP inhibit IL-12/IL-18-induced TNFα production (Fig. 3B). This is consistent with previous studies in which IL-18 was shown to induce TNFα production and IL-18BP reduced Staphylococcus epidermidis-induced TNFα production in human whole blood (80)(81)(82)(83). Though IL-12/IL-18-induced IL-1β production was below detection, an inhibitory effect was observed on IL-12/IL-18-induced intracellular IL-1α by anti-IL-1R7 and IL-18BP (Fig. S6A), indicating a potential impact of IL-18 signaling on IL-1. It is not surprising that, in comparison to IL-18BP, our anti-IL-1R7 presented a relatively weaker inhibition of IL-12/IL-18-induced cytokine production (Fig. 3). IL-18BP is known to be a natural inhibitor of IL-18 (31) and directly binds and blocks the activity of IL-18. In contrast, anti-IL-1R7 indirectly suppresses the activity of IL-18 by blocking the function of its coreceptor IL-1R7. The detailed mechanism by which our novel anti-IL-1R7 suppresses the function of endogenous IL-1R7, and how it regulates the association of IL-1R7 with IL-1R5 and/or IL-18 to initiate downstream IL-18 signaling, would be worthy of further investigation.

To test the effects of anti-IL-1R7 on pathogen-activated inflammatory responses in which other signaling besides IL-18 is involved, we found that the antibody also significantly suppressed LPS-induced IFNγ in both PBMC and whole blood cultures (Figs. 4 and 5; and Fig. S3). This is consistent with the requirement of IL-18 for LPS-induced IFNγ production in PBMCs (71,84). Though no obvious effect was observed on LPS-induced TNFα, IL-6, or IL-1β, LPS-induced intracellular IL-1α was found to be downregulated by anti-IL-1R7 (Fig. S6B), suggesting a potential involvement of IL-18 in LPS-mediated IL-1 signaling. In the Candida model, a significant suppression was detected in Candida-induced IFNγ, while no effect was observed on other cytokines such as TNFα, IL-6, IL-1β, or IL-1α (Figs. S4 and S6C). Moreover, the effect of either anti-IL-1R7 or IL-18BP on Candida-induced IFNγ production was smaller than that observed with LPS or IL-12/IL-18. We postulate that this might be due to the various pattern recognition receptors and signaling pathways that mediate the complex Candida-host immune responses (85,86), in which IL-18 plays a relatively minor role.

Notably, in the recent COVID-19 outbreak, a cytokine profile resembling sHLH was found to be associated with COVID-19 disease severity, characterized by increased cytokines such as IFNγ, MCP-1, MIP1-α, and TNFα (87). Moreover, high levels of IL-18 were found to be associated with disease severity and poor clinical outcome in COVID-19 patients (27,29,30,88,89).
These findings shed light on the role of IL-18 in COVID-19 pathogenesis and indicate the potential of high plasma IL-18 as a disease marker in the prognosis and treatment of severe COVID-19 patients. Importantly, the COVID-19 pandemic has brought attention to a virally induced hyperinflammatory lung injury, sometimes evolving into cytokine storm syndrome, multiple-organ failure, and death (90,91). This finding mirrors what has been observed in MAS (92,93). Indeed, MAS was found to be present in some COVID-19 patients, and a significantly higher serum IL-18 level was observed in patients with MAS than in patients without MAS (28,43). In the same study, patients with or without MAS also presented higher serum IL-18 than healthy controls, and the IL-18 level was significantly higher in nonsurvivors than in survivors (28). Similarly, in SARS caused by SARS-CoV-1, the IL-18 concentration was found to be considerably elevated compared with that in healthy subjects and was significantly higher in nonsurvivors than in survivors (94,95). IL-18 was involved in an IFNγ-related cytokine storm in these patients (94). Moreover, IL-18 and IL-1R7 are found to be highly expressed in cell-to-cell communication among immune cells in COVID-19 patients (96), and elevated IFNγ was observed in COVID-19 patients in line with increased IL-18 levels (30,88,89). However, their exact function remains unknown. Altogether, results from this study set the stage for future work to characterize the in vivo function of this novel antibody in clinical studies of IL-18-mediated diseases such as MAS, IBD, and rheumatic diseases (5,26). Further research on its application will not only provide new mechanistic insights into the function of IL-18 in disease, but will also likely identify novel therapeutic targets for treating IL-18-mediated diseases. In particular, patients carrying the NLRC4 mutation with life-threatening enterocolitis could potentially benefit from such an antibody specific to IL-18 inhibition (32). Whether the anti-IL-1R7 antibody could also help to reduce the cytokine storm and associated organ damage in COVID-19 will also be worthy of further exploration.

Antibodies and reagents

The anti-human IL-1R7 antibody was generated by immunization of New Zealand white rabbits (Charles River Laboratories) with human recombinant IL-1R7 protein. Anti-human IL-1R7 antibody and a nonbinding isotype control antibody were produced as the hIgG1-LALA isotype in HEK293-FreeStyle cells from Thermo Fisher Scientific and purified from the supernatant using protein-A affinity chromatography followed by size-exclusion chromatography (MAB Discovery GmbH). The antibodies have an incorporated double substitution, LALA, that significantly reduces binding to FcγRs and thus avoids Fc-mediated effector functions (61,67). The antibodies were then dissolved in buffer containing 20 mM histidine and 140 mM NaCl at pH 6, divided into aliquots, and stored at −80 °C before use. Lipopolysaccharide (LPS) from Escherichia coli (O55:B5) was purchased from Sigma. Heat-killed Candida albicans UC820 was kindly provided by Professor Mihai Netea (Radboud University Medical Centre). Human IL-12 was from PeproTech. Human IL-18 and IL-1β were from Bio-Techne. Recombinant human IL-37 46-218 was produced as described earlier (97). The reference anti-IL-1R7 monoclonal antibody MAB1181 of R&D Systems (R&D MAB1181 reference mAb) and the anti-IL-37 monoclonal antibody were from Bio-Techne.
Clinical-grade recombinant human IL-18BP was a gift provided by the Serono Pharmaceutical Research Institute (SPRI). Human IL-1Ra (anakinra) was a kind gift from Amgen. For cytokine measurements, the corresponding DuoSet ELISA kits for human IL-1β, TNFα, IL-6, IFNγ, and IL-1α were from Bio-Techne.

Immobilized ELISA binding of anti-IL-1R7 to human IL-1R7

Nunc 384-well MaxiSorp plates were coated with recombinant human IL-1R7 extracellular domain (hIL-1R7-Fc; MAB Discovery GmbH) or recombinant rhesus monkey IL-1R7 extracellular domain (Sino Biological Inc; #90122-C08H) at a concentration of 0.5 μg/ml in PBS for 60 min at room temperature. Plates were washed three times with wash buffer (PBS, 0.1% Tween) and blocked with PBS, 2% BSA, 0.05% Tween for 60 min at room temperature. After three washes with wash buffer, antibodies were added in ELISA buffer (PBS, 0.5% BSA, 0.05% Tween) at concentrations ranging from 10 μg/ml to 0.006 ng/ml (1:3 dilution series) and incubated for 60 min at room temperature. Plates were washed three times with wash buffer, followed by incubation with a peroxidase-linked goat anti-human F(ab')2 secondary antibody (AbD Serotec) at a dilution of 1:5000 in ELISA buffer for 60 min at room temperature. Plates were washed six times with wash buffer before TMB substrate solution (Invitrogen; 15 μl/well) was added. After 5 min of incubation, stop solution (1 M HCl, 15 μl/well) was added and absorbance (450 nm/620 nm) was measured using a Tecan M1000 plate reader. Curve fitting and EC50 calculation were done using GraphPad Prism 8.
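The curve fitting above was performed in GraphPad Prism 8; for readers without Prism, a minimal Python sketch of an equivalent four-parameter logistic (4PL) fit is given below. The dilution series and OD readings are illustrative placeholders, not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """Four-parameter logistic dose-response curve (sigmoid in log-dose)."""
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

# Illustrative 1:3 dilution series (ng/ml) and OD450 readings (invented values)
conc = np.array([0.006, 0.018, 0.055, 0.165, 0.5, 1.5, 4.6, 13.7,
                 41.2, 123.5, 370.4, 1111.1, 3333.3, 10000.0])
od = np.array([0.05, 0.06, 0.08, 0.12, 0.22, 0.45, 0.80, 1.20,
               1.55, 1.75, 1.85, 1.90, 1.92, 1.93])

# Initial guesses: plateaus from the data, EC50 near the curve midpoint
params, _ = curve_fit(four_pl, conc, od,
                      p0=[od.min(), od.max(), 10.0, 1.0], maxfev=10000)
bottom, top, ec50, hill = params
print(f"EC50 = {ec50:.2f} ng/ml, Hill slope = {hill:.2f}")
```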
Cell binding of anti-IL-1R7 to human IL-1R7

HEK-293-FreeStyle cells were transfected with DNAs encoding full-length human or mouse IL-1R7 using 293-Free Transfection Reagent (Merck). Twenty-four hours after transfection, cells were seeded in a 96-well round-bottom plate at a cell density of 1 × 10⁶ cells/ml in stain buffer (BD). Anti-hIL-1R7 antibody was added to a final concentration ranging from 10 μg/ml to 0.06 ng/ml and incubated for 1 h in the dark at 4 °C. Cells were washed once with 150 μl DPBS and incubated with Alexa Fluor 488-conjugated goat F(ab')2 anti-human IgG (H + L) (Jackson ImmunoResearch Laboratories; Cat. no. 109-546-003) at a concentration of 0.8 μg/ml in stain buffer. Cells were washed once with 150 μl DPBS and resuspended in 150 μl stain buffer containing 1:500 diluted DRAQ7 solution (Abcam; Cat: ab109202; 0.3 mM). Cells were analyzed using a BD FACSVerse flow cytometer.

PBMC cultures

The study was approved by the Colorado Multiple Institutional Review Board (COMIRB) and abides by the principles of the Declaration of Helsinki. Venous blood from healthy consenting donors was drawn into lithium heparin-containing tubes, and PBMCs were isolated by centrifugation over Ficoll-Hypaque cushions as previously described (51,98,99). Cells were washed three times with saline and resuspended in RPMI at 5 × 10⁶/ml. For IL-12/IL-18 stimulation, 0.5 × 10⁶ cells were seeded per well in 96-well round-bottom plates and cultured in a total of 200 μl for 24 h, with or without the combination of 2 ng/ml IL-12 + 20 ng/ml IL-18, in the presence of different concentrations of control antibody, the reference antibody MAB1181, anti-IL-1R7, IL-18BP, or IL-1Ra. Aliquots of the control antibody, anti-human IL-1R7, IL-18BP, or IL-1Ra were freshly diluted in warm RPMI to different concentrations for the experiments. For LPS stimulation, 0.5 × 10⁶ cells were seeded per well in 96-well flat-bottom plates and cultured in a total of 200 μl for 24 h, or in 200 μl RPMI with 10% FBS for 3 days, with or without 10 ng/ml LPS in the presence of different concentrations of control antibody, anti-IL-1R7 antibody, IL-18BP, or IL-1Ra. For PBMC experiments with recombinant IL-37 46-218, IL-37 46-218 was preincubated with blank (RPMI medium), 1 μg/ml anti-IL-37 monoclonal antibody, or 1 μg/ml anti-IL-1R7 antibody for at least 10 min before being added to the cells for a 1-h pretreatment. After that, the cells were stimulated with 10 ng/ml LPS for 24 h. For cultures with heat-killed Candida albicans, 0.5 × 10⁶ cells were seeded per well in 96-well round-bottom plates and cultured in a total of 200 μl RPMI with 10% FBS for 5 days, with or without Candida (10⁶ colonies per ml) (100,101) in the presence of different concentrations of control antibody, anti-IL-1R7 antibody, IL-18BP, or IL-1Ra. The antibodies, IL-18BP, or IL-1Ra were added 30 min before the stimuli. After the incubation times were completed, supernatants were collected by centrifugation at 400g for 5 min and stored at −80 °C. Cells remaining in the wells were lysed in 100 μl 0.5% Triton X-100 in water and stored at −80 °C for intracellular IL-1α analysis.

Human whole blood culture

One milliliter of heparinized blood was added to 12 × 75 mm round-bottom polypropylene tubes (Falcon) as described previously (72), and 1 ml of RPMI with or without 10 ng/ml LPS was then added for stimulation in the presence of different concentrations of anti-IL-1R7, IL-18BP, or IL-1Ra. The antibody, IL-18BP, or IL-1Ra was added 30 min before the stimuli. The tubes were closed tightly with the caps and mixed by inversion. Blood was incubated upright in the sealed tubes at 37 °C for 3 days. After incubation, the tubes were inverted several times to mix the formed elements, and Triton X-100 (5%; Bio-Rad Laboratories) was added to a final concentration of 1%. The tubes were again inverted several times until the blood was clarified. The lysed blood was frozen at −70 °C until assay.

HEK-Blue IL-18 assay

HEK-Blue IL-18 cells (InvivoGen) were cultivated in DMEM, 10% FCS, and seeded in 384-well clear, flat-bottom, cell-culture-treated microplates (Corning) at a cell density of 12,500 cells/well in 15 μl medium. Various concentrations of anti-hIL-1R7 antibodies were added in a volume of 5 μl medium, and plates were incubated for 60 min at 37 °C/5% CO2. Recombinant human IL-18 (MBL Co Ltd) protein was added in 5 μl medium to a final concentration of 100 pg/ml, and plates were incubated overnight at 37 °C/5% CO2. In total, 5 μl of cell supernatant was transferred to clear, flat-bottom polystyrene NBS microplates (Corning) containing 20 μl 2× QUANTI-Blue reagent (InvivoGen). Plates were incubated at 37 °C for 45 min, and optical density was measured at 655 nm using a Tecan M1000 plate reader. Curve fitting and EC50 calculation were done using GraphPad Prism 8.
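As a worked check of the dosing arithmetic in these small-volume assays, using only the volumes stated above (15 μl cells + 5 μl antibody + 5 μl IL-18 = 25 μl final volume), reaching a final IL-18 concentration of 100 pg/ml requires a working stock of

$$
c_{\mathrm{stock}} = c_{\mathrm{final}} \times \frac{V_{\mathrm{final}}}{V_{\mathrm{added}}} = 100\ \mathrm{pg/ml} \times \frac{25\ \mu\mathrm{l}}{5\ \mu\mathrm{l}} = 500\ \mathrm{pg/ml}.
$$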
A549-hIL-1R7/9 assay

Human lung epithelial A549 cells stably transfected with hIL-1R7- and hIL-1R9-encoding genes were cultured in Ham's F-12K medium containing 10% FCS. In total, 12,500 cells per well were seeded into a 384-well clear, flat-bottom, cell-culture-treated microplate (Corning) in 15 μl medium. After 24 h at 37 °C, 5% CO2, medium was removed and cells were washed three times with 25 μl 1× PBS, 0.05% Tween and then resuspended in 15 μl growth medium. Antibodies were added at different concentrations in a volume of 5 μl and incubated with the cells for 60 min. hIL-18 recombinant protein (MBL Co Ltd) was then added to a final concentration of 10 ng/ml in a volume of 5 μl. Cells were incubated for 6 h at 37 °C/5% CO2. The human IL-6 concentration in the cell supernatant was determined using the DuoSet ELISA (Bio-Techne) according to the manufacturer's instructions. Curve fitting and EC50 calculation were done using GraphPad Prism 8.

KG-1 IFNγ release assay

KG-1 cells were cultured in RPMI 1640 medium containing 20% FCS and 2 mM L-glutamine. In total, 13,500 KG-1 cells per well were seeded into a 384-well clear, flat-bottom polystyrene NBS microplate (Corning) in a volume of 15 μl. Antibodies were added at different concentrations in a volume of 7.5 μl medium and incubated with the cells for 60 min at 37 °C, 5% CO2. hIL-18 and TNFα recombinant proteins (Bio-Techne) were then added in a volume of 7.5 μl at final concentrations of 5 ng/ml and 10 ng/ml, respectively. Cells were incubated for 48 h at 37 °C, 5% CO2. Human IFNγ concentrations in the cell supernatant were determined using the DuoSet Human IFNγ ELISA kit (Bio-Techne) according to the manufacturer's instructions. Curve fitting and EC50 calculation were done using GraphPad Prism 8.

A549-hIL-1R7 cell culture

Human A549 cells stably overexpressing IL-1R7 (the IL-18 receptor β chain) were cultured in F12-K culture medium (Cellgro) supplemented with 10% FBS as described before (68). In total, 50,000 cells were seeded in 96-well flat-bottom cell culture plates and pretreated with or without different concentrations of anti-IL-1R7 or IL-18BP for 30 min. The cells were then stimulated with 50 ng/ml IL-18 or 1 ng/ml IL-1β overnight before the supernatant was collected for IL-6 measurement. Cells remaining in the wells were lysed in 100 μl 0.5% Triton X-100 in water for intracellular IL-1α measurement.

Statistical analysis

The significance of differences was evaluated with Student's two-tailed t-test. The IL-18-, IL-12/IL-18-, LPS-, or other inflammatory stimulus-induced cytokine production in cells without pretreatment was set at 100% unless specified. The mean percent change for each condition was calculated for each group. The data shown represent the mean ± SD.

Data availability

All data are contained within the article and are available upon request.

Supporting information: This article contains supporting information.
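A minimal sketch of the percent-of-control normalization and two-tailed t-test described above; the replicate values are invented for illustration and are not data from the study:

```python
import numpy as np
from scipy import stats

# Illustrative cytokine readings (pg/ml) from replicate wells (invented values)
stimulated = np.array([950.0, 1020.0, 880.0])   # stimulus only, defined as 100%
treated    = np.array([310.0, 290.0, 355.0])    # stimulus + anti-IL-1R7

# Express treated wells as a percentage of the untreated stimulated control
percent_of_control = treated / stimulated.mean() * 100.0
print(f"mean = {percent_of_control.mean():.1f}% "
      f"+/- {percent_of_control.std(ddof=1):.1f} (SD)")

# Student's two-tailed t-test on the raw readings
t_stat, p_val = stats.ttest_ind(stimulated, treated)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")
```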
2021-04-03T13:10:13.838Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "15200538b283549e95ec240087605bab39330705", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/article/S0021925821004166/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9f8f8753cd142b2436fc38e7e84733c6b25df701", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
246189286
pes2o/s2orc
v3-fos-license
Research on an Anomaly Detection Method for Physical Condition Change of Elderly People in Care Facilities

Currently, the shortage of care workers for the elderly has become a big problem, and more streamlined care operations are needed. In care facilities, care workers are required to use their subjective experience to detect anomalies in the physical condition of care receivers, including serious or insignificant deterioration or behavioral and psychological symptoms of dementia, which can decrease work efficiency. Therefore, we aim to create a model that uses objective data to detect anomalies in physical condition. In this study, data from 13 subjects in a care facility were collected, and isolation forest models were constructed for each subject. The subjects' anomalies in physical condition were documented in a care record by a nurse and used as the reference for model evaluation. Recall and specificity were used to evaluate the model, expressed as the percentage of detection success for abnormal or normal conditions. Data collected for 1 to 60 days were used to train the isolation models, and the relationship between the amount of training data and model performance was simulated. Heart rate, respiratory rate, and time out of bed were collected from a sensor placed on the subject's bed and used as the model features. In addition, dietary intake information was collected from the care record. Analysis of the evaluation results showed a recall and specificity of 45.6 ± 46.7% and 83.88 ± 6.06%, respectively, for the model constructed using training data of 60 days. For future studies, we will continue to collect data and increase the number of participants to improve the robustness and accuracy of the proposed anomaly detection system.

Introduction

Currently, the shortage of care workers for the elderly has emerged as a big problem; thus, more streamlined care operations are necessary. In care facilities, workers are required to detect anomalies in the physical condition of residents, including serious or insignificant deterioration or behavioral and psychological symptoms of dementia (BPSD), based on their own subjective experience or through self-reporting by the persons being cared for. However, as elderly people in care facilities are frequently affected by dementia, they may find it difficult to self-report their physical condition. In addition, subjective judgement based mainly on visual information may cause variability between caregivers, leading to decreased work efficiency. Therefore, a quantitative and objective judgement system can be an effective solution for streamlining care operations and addressing the shortage of care workers. In previous studies, Internet-of-Things (IoT) or multiple sensors have been used to collect data on daily activities or the risk of deterioration [1-8]. In these studies, various kinds of sensors were used, and the usefulness of the collected data for monitoring changes in daily activities or detecting risk for elderly people was demonstrated [7]. For example, a study that collected data on the daily TV use of an elderly woman for 10 years suggested a correlation trend between television usage and social activities [8]. Meanwhile, some researchers used wearable sensors to estimate lower limb muscle strength [9] or to evaluate the extent of frailty [10]. In addition, models for detecting dangerous poses [11] and anomalies in health data [12] were constructed in previous studies.
However, care facilities may find it difficult to introduce multiple sensor systems, wearable sensors, or video cameras into their operations for long durations. Multiple sensor systems entail high costs, and a complex data-gathering system can be a burden for care facilities. Wearable sensors may carry a risk of accidental ingestion or breakage for dementia patients, and video cameras may cause privacy problems. Therefore, we decided to use a single sensor placed in the bed, which requires neither wearing a device nor video data. In addition, several models constructed in previous studies tried to detect whether subjects had an illness by comparing values collected in two periods [5,10]. However, for facility use, a daily anomaly detection system that flags an abnormal condition for each day and for each subject is needed. Therefore, we conducted a four-month demonstration experiment in a real-world care facility and output, every day, whether each subject was in an abnormal or normal condition. The accuracy of the anomaly detection model was then evaluated from the output results for every subject. In our study, we constructed an anomaly detection model to detect changes in condition for each day, using a single pressure sensor placed in the bed. The pressure sensor can collect the heart rate, respiratory rate, and the status of whether the subject is in bed or out of bed. Then, using the collected data, we constructed an isolation forest model [13] that can detect changes in the subject's physical condition. Throughout the experiment, the model was updated three times with the gathered data. Changes in physical condition were recorded by a facility nurse and used as the correct answers for accuracy evaluation. Further, to analyze the relation between the amount of training data and accuracy, we constructed several simulation models with training data of different numbers of days (1, 3, 5, 7, 14, 30, and 60 days), and the accuracy was calculated. The paper is structured as follows. In Section 2, we describe the participant groups, the sensor, and the method for processing the sensor signals into features. In Section 3.1, we present the evaluation of the accuracy of the proposed model in a facility environment. In Section 3.2, the results of the simulation models are presented, along with an analysis of the relation between the amount of training data and accuracy. Section 4 presents the discussion, and Section 5 states the conclusions.

Subjects and sensors

A total of 31 residents of an elderly care facility were recruited. Data from the 13 subjects (aged 90.23 ± 4.82 years; 12 female, 1 male) whose changes in physical condition were recorded by the facility nurse were used in the model evaluation. All subjects were affected by dementia, and two subjects had a history of stroke. This study was conducted in accordance with the ethical principles of the Declaration of Helsinki and was approved by the Panasonic Healthcare Ethical Review Board. Informed consent was obtained from the subjects before monitoring was started. A pressure sensor (SleepAce RestOn Z400T, Shenzhen Medica Technology Development Co.) was placed on each bed to gather biological data while the subject was in bed. The collected data included the heart rate, respiratory rate, and whether the subject was in bed or out of bed. Figure 1 shows the features calculated from the sensor data and from the care record. We used both sensor- and care record-related features.
The features and models were processed for each subject individually. First, the heart rate, respiratory rate, and in-bed/out-of-bed data were collected by the sensor at 1 Hz. Then, using the in-bed/out-of-bed data, the absence time per hour was calculated. If the absence time was less than 10 min/h, the heart rate and respiratory rate features were computed from the corresponding 1-hour data; if the absence time was over 10 min/h, the 1-hour data were excluded. The absence time was also used as a feature.

Features and models for anomaly detection

Using the heart rate and respiratory rate data, the mean, maximum, minimum, standard deviation, kurtosis, skewness, and impulse factor were calculated as features. In addition, the differential value of the sensor-collected data was calculated for both heart rate and respiratory rate. Using the differential value data, the mean, maximum, minimum, standard deviation, kurtosis, skewness, and impulse factor were calculated as difference-value-based features. Dietary intake for each meal was recorded by a nurse or care staff using a scale of 1 to 10. The summary value of dietary intake for 1 day was used as a feature and was upsampled to 1-hour data. Recording of dietary intake is one of the daily tasks in this facility; therefore, the assessment method was shared between care workers and nurses. The processed features were accumulated in a database and used as training data to construct an isolation forest model. The model can calculate anomaly scores for each hour even without correct labels. Figure 2 shows the method for calculating the 5-level anomaly score, which is the final output of our anomaly detection system. The features from the sensor or care record were first input into the constructed model, and the mean anomaly score for one day was calculated. Then, the moving average of the 3-day values was computed. Moving average values were accumulated and used to calculate thresholds for the scaled score, using the following statistics: average, average − 0.5 SD, average − SD, and average − 2 SD. Finally, the 5-level scaled scores were used as the output of our system. In our system, a score of 4 or 5 indicated an anomaly.

Verification test in a care facility

We started to collect sensor data in the facility from October 2020, and the anomaly detection system was in operation from January to April 2021. Initially, the anomaly detection model was trained with data collected from all the subjects from October 2020. From January 18, an individually trained model for each subject was introduced. The isolation forest model was updated on February 16 and March 10 using the collected features as training data. The nursing staff in the care facility performed verification by recording whether the subject's physical condition was captured by the 5-level scaled score. Using the nursing record as reference, the recall and specificity of the individual models (starting from January 18) were evaluated. Recall and specificity were calculated from the number of detected or recorded normal/abnormal conditions of participants, i.e., the TP/TN/FP/FN counts shown in Table 1. Recall indicates the ability to detect abnormal conditions, and specificity indicates the ability to detect normal conditions. They were defined as

recall = TP / (TP + FN), (1)

specificity = TN / (TN + FP). (2)
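A compact Python sketch of the per-subject pipeline described above follows: hourly feature screening, one isolation forest per subject, and the 5-level scaled score. Column names are hypothetical, and the threshold direction (lower score = more anomalous, matching sklearn's score_samples convention) is our assumed reading of the thresholds listed above.

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

def hourly_features(df):
    """df: 1 Hz samples with hypothetical columns 'timestamp', 'heart_rate',
    'resp_rate', and 'in_bed' (1 = in bed, 0 = out of bed). Returns one
    feature row per hour; hours with more than 10 min out of bed are excluded,
    per the screening rule described above."""
    rows = []
    for hour, g in df.set_index("timestamp").groupby(pd.Grouper(freq="1H")):
        absence_sec = int((g["in_bed"] == 0).sum())  # at 1 Hz, samples == seconds
        if absence_sec > 10 * 60:
            continue
        in_bed = g[g["in_bed"] == 1]
        rows.append({
            "hour": hour,
            "absence_min": absence_sec / 60.0,        # absence time is a feature
            "hr_mean": in_bed["heart_rate"].mean(),
            "hr_std": in_bed["heart_rate"].std(),
            "rr_mean": in_bed["resp_rate"].mean(),
            "rr_std": in_bed["resp_rate"].std(),
        })
    return pd.DataFrame(rows)

def fit_subject_model(train_features):
    """One isolation forest per subject, trained on accumulated hourly features."""
    return IsolationForest(random_state=0).fit(train_features)

def scaled_level(daily_mean_scores):
    """5-level scaled score from accumulated daily mean anomaly scores:
    a 3-day moving average is compared against thresholds at the mean,
    mean - 0.5 SD, mean - SD, and mean - 2 SD; levels 4-5 flag an anomaly."""
    ma = pd.Series(daily_mean_scores).rolling(3, min_periods=1).mean()
    mu, sd = ma.mean(), ma.std()
    latest = ma.iloc[-1]
    if latest < mu - 2.0 * sd:
        return 5
    if latest < mu - 1.0 * sd:
        return 4
    if latest < mu - 0.5 * sd:
        return 3
    if latest < mu:
        return 2
    return 1
```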
Simulation test with changing amount of training data

In the facility test, we updated the isolation forest model three times using the collected features. In addition, IoT sensors were placed in the subjects' rooms gradually, so the amount of collected data and training data for the facility test models differed among subjects. Thus, the accuracy of the facility test can be affected by changes in the model and the amount of training data. For example, if the amount of training data is too small, the data distribution becomes sparse, causing decreased accuracy. On the other hand, if the amount of training data is too large, masking of abnormal data may occur, and detection ability will decrease. To verify the relation between the amount of training data and model performance, we constructed seven simulation models using the data collected from the facility test. For the simulation test, data from January to April were used for training and testing. Features collected for 1, 3, 5, 7, 14, 30, and 60 days were used as training data, and recall and specificity were then calculated as evaluation indices for each model.

Model performance in the facility test

The recall and specificity in the facility test were 66.03% (± 46.46) and 80.44% (± 12.20), respectively. These values were calculated from the care records reported by nurses and the 5-level scaled score, which was used from January 18, 2021 to April 14, 2021. Recall and specificity represent the detection abilities for abnormal and normal conditions, respectively. During the facility test period, the isolation forest model was updated three times: February 15, March 8, and April 14. In the facility model, the standard deviation of recall was larger than that of specificity, which indicates that the detection of physical anomalies had individual differences among subjects. Figure 3 shows the time-series variation for a subject affected by aspiration pneumonitis and the success of the system in detecting the change in condition. From February 19 to 20, anomaly scores could not be calculated because of a sensor error. The subject had a serious accidental-swallowing event on February 20. She developed a fever after the incident and was admitted to hospital on February 24 with aspiration pneumonitis. Her body temperature and scaled anomaly score were normal before the accident occurred: the scaled anomaly score continued to show level 1 or 2, indicating a normal condition. Then, after the accident, the scaled score increased to level 5, indicating a large physical anomaly from the time aspiration occurred until admission to the hospital, and coincided with the declining condition of the subject. In this case, a drastic change in the subject's condition was detected by our model. Figure 4 shows the relation between the amount of training data and the detection ability of the simulated models for anomalous or normal conditions. The recall and specificity values in Fig. 4 are the mean values of the model evaluation indices over subjects. When the recall and specificity of the 1-day model were compared with those of the other models by Dunnett's test, the p values showed no significant difference between models. In Fig. 4(a), when data of 60 days were used for training, the recall obtained was 45.56% (± 46.67). On the other hand, as seen in Fig. 4(b), specificity increased with an increase in the amount of training data. The highest mean specificity was observed in the model that used training data of 60 days, with a value of 83.88% (± 6.06).
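For reference, per-subject recall and specificity values like those reported above follow directly from Eqs. (1) and (2); a small sketch with invented confusion counts (not the study's data):

```python
import numpy as np

def recall_specificity(tp, fn, tn, fp):
    """Eq. (1) and (2): detection rates for abnormal and normal days."""
    recall = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return recall, specificity

# Illustrative per-subject (TP, FN, TN, FP) counts over the test period
counts = [(3, 1, 80, 10), (0, 2, 70, 20), (1, 0, 90, 15)]
recalls, specs = zip(*(recall_specificity(*c) for c in counts))
print(f"recall = {np.mean(recalls) * 100:.2f}% (± {np.std(recalls, ddof=1) * 100:.2f})")
print(f"specificity = {np.mean(specs) * 100:.2f}% (± {np.std(specs, ddof=1) * 100:.2f})")
```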
Discussion

In our research, we validated the efficacy of the anomaly detection model using an IoT sensor, which was used for collecting data and updating models in the facility test. Sensors were introduced gradually from October 2020, and the facility testing operation was started in January 2021. The test system was updated three times during the testing period. During the facility test, the anomaly detection models changed with each update, which can affect the accuracy. Therefore, to evaluate the relation between model accuracy and the amount of training data, we constructed models using training data of different durations collected during the facility test and calculated recall and specificity as indices of the detection ability for abnormal or normal conditions. The mean values of recall and specificity of the facility test system were 66.03% (± 46.46) and 80.44% (± 12.20), respectively, and the recall and specificity in the simulation test using data of 60 days for training were 45.56% (± 46.67) and 83.88% (± 6.06), respectively. In care facilities, body temperature is commonly used to detect changes in condition, including infections. According to the guideline for long-term care facilities developed by the Infectious Diseases Society of America, the recall of detecting infections for elderly people in care facilities was 70% [14]. Therefore, a detection success rate of 70% may be one accuracy goal for a condition-detecting system. In our system, there were individual differences in accuracy, especially in recall for the 60-day simulation models. Models for some subjects in the facility test or the simulation test were able to detect changes in condition with 100% recall or specificity, but models for some other subjects did not fulfill our goal of 70%. These differences in accuracy may be caused by differences in the type of anomaly or the attributes of participants. In a future study, we will try to classify the types of change in condition and evaluate accuracy for various types of anomaly. In addition, in dementia patients, the type or severity of dementia may affect sleep behavior or night-time emotional behavior [15,16]. In our anomaly detection system, sensors were installed on the bed; therefore, changes in sleep behavior or BPSD at night could change the trend of features in the sensor data, which might affect accuracy. We will analyze the relation of system accuracy with subject attributes, including sex, age, motor function, ADL, medical history, and severity of dementia. The recall values differed between the facility test and the simulation test. In addition, the standard deviation was large in both tests, which indicated that the constructed models had different abilities to detect abnormal conditions in different subjects. Because the facility test and simulation models used data from October 2020 and January 2021, respectively, for training, the difference in the data collection periods may have affected the recall value. In addition, although there were only a few cases of anomalies in physical condition, recall may be easily affected by even a small change in the model. Therefore, to evaluate recall, which represents the ability to detect an abnormal condition, a larger number of subjects with anomalous conditions is required. For a future study, we will continue to collect sensor data in the same facility to evaluate the recall value accurately. Specificity increased to nearly 80% in both the facility and simulation tests, and its standard deviation was smaller than that of recall. Therefore, the system we developed was able to detect the normal condition of each subject. As shown in Fig. 4(a), recall and specificity did not change significantly among the various models.
This result indicates that the masking of anomalous data, which is often caused by a large amount of training data in an isolation forest, is not a problem in the system we have developed. Therefore, longer training data might be needed to improve accuracy significantly, and we are now collecting longer-duration data for a future study. While the system was being tested in the facility, one subject was affected by aspiration pneumonitis, and the constructed model succeeded in detecting the change in condition. Subsequently, the scaled score continued to indicate an abnormal condition before the subject was admitted to the hospital. Therefore, it is evident that the scaled score output from the system accurately reflected the physical condition of the subject. Similar to this case, severe outcomes may be averted if the anomaly detection system can detect subjects with abnormal conditions and alert the care staff, or detect diseases in an early phase. In the future, we will continue to collect sensor and care record data in facility operations and confirm whether our anomaly detection system contributes to improving the efficiency of care operations.

Conclusion

In this study, we constructed an anomaly detection model to detect changes in the physical condition of care receivers using an IoT sensor, aiming to improve the efficiency of care operations. Using the developed system, a facility operation test was conducted for 4 months. We obtained a recall of 66.03% (± 46.46) and a specificity of 80.44% (± 12.20). In addition, our anomaly detection system succeeded in detecting the abnormal condition of a subject affected by aspiration pneumonitis. The novel system was able to warn the facility care staff about changes in the subject's physical condition with quantitative data. In addition, simulation tests using training data of different durations were conducted. The recall and specificity of the simulation model using training data of 60 days were 45.56% (± 46.67) and 83.88% (± 6.06), respectively. There were no significant decreases in accuracy between models, which showed that there was no masking problem caused by excessive training data. To improve the model, more training data must be collected in the next study. Owing to the availability of few records on the abnormal conditions of participants, more cases are required to accurately evaluate the ability of our system to detect abnormal conditions. On the other hand, in both the facility and simulation tests, the difference in specificity between subjects was small, which shows the stability of the system in detecting normal conditions. For future studies, we will continue to collect data and increase the number of participants to improve the robustness and accuracy of the proposed anomaly detection system.
2022-01-23T16:11:16.807Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "4949ff97a2e499fd4117a39bb7cc68a6c3f40ac3", "oa_license": "CCBYNC", "oa_url": "https://www.jstage.jst.go.jp/article/abe/11/0/11_11_10/_pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "720d5662f4b05c5f34e7fcbc319f0416ccd839b3", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [] }
239470620
pes2o/s2orc
v3-fos-license
Effects of Somatic, Depression Symptoms, and Sedentary Time on Sleep Quality in Middle-Aged Women with Risk Factors for Cardiovascular Disease

Cardiovascular disease (CVD) is the second leading cause of death among Korean women, and its incidence is dramatically elevated in middle-aged women. This study aimed to identify the predictors of sleep quality, a CVD risk factor, in middle-aged women with CVD risk factors to provide foundational data for developing intervention strategies for the prevention of CVD. The subjects, 203 middle-aged women (40–65 years old) with one or more CVD risk factors, were selected through convenience sampling and included in this descriptive correlational study. The effects of somatic symptoms, depression symptoms, and sedentary time on sleep quality were examined. CVD-related characteristics were analyzed using descriptive statistics, whereas differences in the mean values of the independent variables were analyzed using t-tests and analysis of variance. Predictors of sleep quality were analyzed using multiple regression analysis. The results showed that sleep quality increased with decreasing somatic symptoms (β = −0.36, p < 0.001), depression symptom score (β = −0.17, p = 0.023), and daily sedentary time (β = −0.13, p = 0.041), and the regression model was significant (F = 19.80, p < 0.001). Somatic symptoms are the most potent predictor of sleep quality in middle-aged women. Thus, intervention strategies that improve somatic symptoms are crucial for the enhancement of sleep quality, which deteriorates with advancing age.

Introduction

According to the World Health Organization (WHO), 31% of deaths worldwide are due to cardiovascular disease (CVD) [1], and one out of every five women in the United States dies from CVD [2]. In 2018, cancer had the highest mortality rate (22%) in Korean women, followed by cardio-cerebrovascular disease (20%), which is 3% higher than that reported in Korean men (17%) [3]. These statistics highlight the need for adequate interventions for the prevention of CVD. CVD risk factors include diabetes mellitus (DM), obesity, hypercholesterolemia, and hypertension [4], which are more common in men than in women before middle adulthood; however, after menopause, the prevalence of these risk factors dramatically increases among women relative to men [5]. The reason for the drastic increase in CVD risk factors after menopause is that estrogen deficiency and elevated adrenaline lead to structural and functional changes in the cardiovascular system; these changes increase visceral fat and systemic inflammation, which induce hypertension, impaired glucose tolerance, abnormal lipid levels, and insulin resistance, leading to a considerable increase in the development of CVD risk factors [5,6]. Somatic symptoms, such as sweating and hot flashes, which commonly occur in menopause as a result of hormonal changes in middle adulthood, deteriorate sleep quality and decrease sleep duration [7-9]. Deterioration of sleep quality and duration further increases the risk of developing cardiometabolic risk factors [10], including obesity, hypertension, and type 2 DM, as well as the risk for coronary artery disease [11]. In addition, these hormonal changes lead to insomnia, obstructive sleep apnea, and restless leg syndrome, which diminish sleep quality and increase cardiovascular mortality in women with sleep apnea [12].
The incidence of depression in relation to insomnia and sleep deprivation is higher among women than among men [12]. Depression and somatic symptoms hinder the early detection of CVD symptoms in middle-aged women [13-15], thus delaying treatment [16]. Moreover, women with depression may have a more sedentary lifestyle with reduced physical activity [17], which further increases the risk for CVD [18]. Thus, there is a growing emphasis on the need for different CVD prevention approaches for men and women due to sex-specific differences in the epidemiology, pathophysiology, clinical management, and outcomes of CVD [19]. The aim of this study was to identify the predictors of sleep quality, a CVD risk factor, in middle-aged women at risk for CVD to provide foundational and valuable data for the development of sex-specific interventions and strategic programs for the prevention of CVD in the future.

Research Design

This was a descriptive correlational study conducted to examine the effects of somatic symptoms, depression, and sedentary time on sleep quality in middle-aged women with CVD risk factors.

Setting and Sample

Middle-aged women aged 40-65 years who live in Seoul or two other large cities in South Korea and have at least one CVD risk factor were selected through convenience sampling. Individuals who met at least one of the following criteria proposed by the American Heart Association [20] were selected: overweight (body mass index (BMI) ≥25 kg/m²), particularly those with central adiposity; diagnosis of hypertension or pharmacological therapy for hypertension, diabetes, or symptoms of CVD; diagnosis of hyperlipidemia or pharmacological therapy for hyperlipidemia; family history of CVD; and minimal physical activity (30 min of moderately intense exercise on <5 days per week). Patients were excluded if they met any of the following criteria: previous diagnosis of cardio-cerebrovascular disease (e.g., myocardial infarction, stroke, cerebral hemorrhage); diagnosis of depression, anxiety, mental disorder, or cognitive disorder, as determined by a psychiatrist; history of hypothyroidism, previous sinus surgery, or autoimmune diseases; and night-shift work, as this work pattern can affect the quality of sleep.

Sample Size

The sample size was determined using a general power analysis program (G*Power 3.1, Heinrich-Heine-Universität, Düsseldorf, Germany). For multiple regression with a medium effect size (f²) of 0.15, power (1 − β) of 0.80, significance of 0.05, and eight independent variables (factors associated with the risk for CVD in previous studies (age, hypertension, DM, BMI, sleep duration) and the parameters of the present study (somatic symptoms, depression, sedentary time)), the minimum sample size was calculated to be 109. Considering a 20% withdrawal rate, 147 participants were required. A total of 203 participants were recruited; thus, the minimum sample size was met.
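The G*Power computation above can be cross-checked numerically. A sketch using the noncentral F distribution follows, assuming G*Power's convention of noncentrality λ = f²·N for this test family (exact conventions may differ slightly):

```python
from scipy import stats

f2, alpha, k = 0.15, 0.05, 8              # effect size f², significance, predictors
for n in range(k + 2, 500):
    df1, df2 = k, n - k - 1               # numerator and error degrees of freedom
    f_crit = stats.f.ppf(1 - alpha, df1, df2)
    power = 1 - stats.ncf.cdf(f_crit, df1, df2, f2 * n)
    if power >= 0.80:
        # expect a minimum N in the neighborhood of 109, as reported in the text
        print(f"minimum N = {n} (power = {power:.3f})")
        break
```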
Data Collection

Data were collected from middle-aged women who visited a private clinic in Seoul, Pyeongtaek-si, Cheongju, or Guri-si, or from those working for a production company (number of employees ≥500), between May 2017 and December 2018. The data were collected with the cooperation of a health nurse and the nurse in charge at the clinic. One of the authors and one CVD nurse explained the purpose of the study to the participants, distributed the open-ended and closed-ended questionnaires, and retrieved them immediately after they were completed. Anthropometric measurements were taken after the questionnaires were completed.

General and Anthropometric Characteristics of the Participants

The education, occupation, medical history, current illness, family history, and perceived health of the participants were surveyed. Height and weight were recorded based on self-reported measurements taken in the past month or measurements the participants recalled; these were used to calculate BMI. To calculate the waist-to-hip ratio (WHR), waist circumference was first measured by gently resting a tape measure (82203-rondo, Korea) around the midpoint between the lowest rib and the upper iliac crest while the participant stood upright on a flat surface with legs 25-30 cm apart after breathing out comfortably. Hip circumference was measured by gently pulling the tape measure around the most protruding part of the hip (method proposed by the WHO). The measurements were taken by one of the authors and one nurse using a tape measure from the same manufacturer, and the WHR was then calculated; the cutoff for WHR was 0.8 or higher [21]. Sleep duration, defined as the average sleep duration per week including naps, was evaluated. The participants were also asked how much time they spent per day being sedentary, excluding sleep time.

Quality of Sleep

The Korean version of the Modified Leeds Sleep Evaluation Questionnaire was used to evaluate sleep quality. This 10-item tool evaluates getting to sleep, awakening from sleep, perceived quality of sleep, and behavior after waking. Each item is rated on a 10-point scale, and the total score ranges from 0 to 100. The optimal cutoff point is 67; a score of 67 or lower indicates poor sleep quality, whereas a score of 68 or higher indicates good sleep quality [22]. The Cronbach's alpha for this tool was 0.88 at the time of development [23] and in this study.

Somatic Symptoms

Somatic symptoms were measured using the Symptom Checklist-90-Revision-Somatization (SCL-90R-SOM) tool (Central Aptitude Publishing Department, 1984, Korea), derived from the Korean version of the Mini-Mental State Exam. The SCL-90R-SOM comprises 12 items about somatization symptoms, which are rated on a four-point Likert scale. The total score ranges from 0 to 48, with a higher score indicating more severe somatization symptoms [24]. The reliability of the tool (Cronbach's alpha) was 0.93 at the time of development [25] and 0.88 in this study.

Depression Symptoms

The Patient Health Questionnaire-9 (PHQ-9) was used to evaluate depression. This tool assesses the frequency of the symptoms described in each item over the past two weeks. The questionnaire contains nine items rated on a four-point scale from 0 (never) to 3 (almost every day). The total score ranges from 0 to 27, with a score of 10 or higher indicating clinical depression. The reliability of the tool (Cronbach's alpha) was 0.95 at the time of development [24-26] and 0.82 in this study.
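The cutoffs above can be summarized in a short sketch (the function and argument names are illustrative, not from the study's materials):

```python
def classify_participant(weight_kg, height_m, waist_cm, hip_cm,
                         lseq_total, phq9_total):
    """Apply the anthropometric and questionnaire cutoffs described above."""
    bmi = weight_kg / height_m ** 2
    whr = waist_cm / hip_cm
    return {
        "overweight": bmi >= 25,                   # BMI criterion for CVD risk
        "central_adiposity": whr >= 0.8,           # WHR cutoff [21]
        "good_sleep": lseq_total >= 68,            # LSEQ: <=67 poor, >=68 good
        "clinical_depression": phq9_total >= 10,   # PHQ-9 cutoff
    }

# Example: a hypothetical participant
print(classify_participant(weight_kg=68, height_m=1.58, waist_cm=84,
                           hip_cm=100, lseq_total=62, phq9_total=7))
```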
Ethical Considerations

The study protocol was approved by the Institutional Review Boards of Hanyang University (HYI-14-118-3) and Kyungdong University (1041455-202012-HR-010-01). The participants were informed that all personally identifiable information would be removed from the data used for analysis to protect their identity. Written informed consent was obtained from all subjects involved in the study. The collected data were kept in a locked cabinet that could only be accessed by the researchers, and the participants were informed that the data would be shredded for disposal upon completion of the study. The anthropometric measurements were taken in a relaxed environment in a counseling room to ensure privacy. Participants who completed the survey were given a small gift.

Statistical Analysis

The collated data were analyzed using SPSS 21.0 (IBM SPSS Statistics, Chicago, IL, USA). The general and CVD-related characteristics of the participants were analyzed using descriptive statistics, whereas the differences in sleep quality according to the independent variables (depression and somatic symptoms) were analyzed using t-tests and analysis of variance. Correlations among sleep quality, general characteristics, and the independent variables were analyzed using Pearson's correlation coefficient. Prior to the identification of the predictors of sleep quality, multicollinearity among the independent variables was tested in step 1 using tolerance and the variance inflation factor (VIF). In step 2, multiple linear regression was performed to identify the independent predictors of sleep quality and to examine the percentage of variance explained by each factor. The reliability of the tools used in this study was evaluated using Cronbach's α. Statistical significance was set at p < 0.05.
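A hedged Python sketch of this two-step analysis (tolerance/VIF screening followed by multiple linear regression), with hypothetical column names standing in for the study variables; the original analysis was run in SPSS:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def sleep_quality_regression(df):
    """Step 1: tolerance and VIF for each predictor.
    Step 2: multiple linear regression on standardized variables (betas)."""
    X = df[["somatic_symptoms", "depression", "sedentary_time"]]
    y = df["sleep_quality"]

    Xc = sm.add_constant(X)
    for i, col in enumerate(X.columns, start=1):   # index 0 is the constant
        vif = variance_inflation_factor(Xc.values, i)
        print(f"{col}: VIF = {vif:.2f}, tolerance = {1.0 / vif:.2f}")

    # Standardize so coefficients are comparable standardized betas
    Xz = (X - X.mean()) / X.std()
    yz = (y - y.mean()) / y.std()
    model = sm.OLS(yz, sm.add_constant(Xz)).fit()
    print(model.summary())
    return model
```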
General Characteristics of the Participants

The mean ages of the good and poor sleep quality groups were 55.74 ± 6.33 years and 53.67 ± 6.93 years, respectively, and were significantly different (t = −2.21, p = 0.028). Regarding occupation, 29 (27.4%) and 21 (22.1%) participants in the poor sleep quality group were manual laborers and service workers, respectively, whereas 63 (52.7%) participants in the good sleep quality group were manual laborers (χ² = 14.52, p = 0.006); the type of occupation thus significantly differed between the two groups. However, perceived health status was not significantly different between the two groups (F = 0.73, p = 0.693) (Table 1).

Cardiovascular Disease-Related Characteristics of the Participants

Among the CVD risk factors and related variables, smoking was significantly associated with sleep quality (χ² = 6.91, p = 0.032). The daily average sedentary time was 5.21 ± 3.14 h in the good sleep quality group and 7.24 ± 8.16 h in the poor sleep quality group, a significant difference (t = 2.33, p = 0.021). Sleep duration also significantly differed between the two groups (χ² = 5.35, p = 0.021); 44.7% of participants in the good sleep quality group and 28.9% of those in the poor sleep quality group reported an adequate sleep duration of 7-8 h. Depression symptoms were significantly higher in the poor sleep quality group than in the good sleep quality group (t = 5.35, p < 0.001), as was the somatic symptom score (t = 4.19, p < 0.001). There was no significant difference in the age at menopause or the presence or absence of menopause between the two groups (t = −0.19, p = 0.848) (Table 2).

Correlation between Sleep Quality and the Independent Variables

Analysis of the correlations between sleep quality and the independent variables showed that sleep quality was negatively correlated with somatic symptoms (r = −0.47, p < 0.001), depression symptoms (r = −0.39, p < 0.001), and daily average sedentary time (r = −0.15, p = 0.041). Depression was positively correlated with somatic symptoms (r = 0.57, p < 0.001) (Table 3).

Table 3. Correlation between sleep quality and the independent variables (N = 204). (Table body not reproduced; footnotes: (1) correlation coefficients and p-values of the nominal items; * correlation significant at 0.05, two-tailed; ** correlation significant at 0.01, two-tailed.)

Comparison of Somatic Symptoms in the Two Groups

Except for a foreign body sensation in the throat, all somatic symptoms significantly differed between the two groups (p values, <0.001-0.031). The most common symptom was low back pain, with a score of 1.54 in the good sleep quality group and 1.80 in the poor sleep quality group. The second most common symptom was heaviness in the legs, with a score of 1.42 in both the good and poor sleep quality groups. A foreign body sensation in the throat was the least common symptom in both groups, with no significant difference between the two groups (p = 0.088) (Figure 1).

Predictors of Sleep Quality in Middle-Aged Women with CVD Risk Factors

To test the assumptions of the regression, the autocorrelation (independence) of the independent variables was tested using the Durbin-Watson statistic; the results confirmed the absence of autocorrelation. Regarding multicollinearity, tolerance (0.75-0.98) was below 1.0, whereas the VIF (1.01-1.33) was below 10, confirming the absence of multicollinearity. Regression analysis was then performed to confirm whether somatic symptoms, depression, and sedentary time, which were significant in the univariate analysis, were independent predictors of sleep quality. The regression model was significant, with the independent variables explaining 24% of the variance (F = 19.80, p < 0.001). In this model, sleep quality increased with decreasing somatic symptom score (β = −0.36, p < 0.001), depression symptom score (β = −0.17, p = 0.023), and daily average sedentary time (β = −0.13, p = 0.041) (Table 4).

Discussion

This study was conducted to identify the predictors of sleep quality that increase the risk for CVD in middle-aged women with pre-existing CVD risk factors. Our results showed that somatic symptoms are the most potent predictor of sleep quality. This finding is consistent with the results of a cohort study of 10,000 participants aged 35-74 years who were followed up for five years, from 2007 to 2012; that study indicated that somatic symptoms and pain are predictors of sleep quality in both men and women [27].
Furthermore, a study of 236 Hong Kong families (224 mothers and 196 fathers; mean age, 47 years) demonstrated that somatic symptoms and pain are important predictors of sleep quality, a result that supports our findings [28]. Pain, repeatedly identified as a predictor of sleep quality, is also a somatic symptom. In the present study, numbness in the legs and heaviness or pain in the extremities were significantly more common in the poor sleep quality group than in the good sleep quality group, which also supports the findings of previous studies. Somatic symptoms were positively correlated with depression and sedentary time in the present study. This is consistent with the findings of a previous cross-sectional study of 960 Korean adults aged 45 years or older, in which somatic symptoms were positively correlated with depression symptoms and negatively correlated with physical activity [29]. In the present study, depression symptoms were the second most potent predictor of sleep quality. This finding is similar to that of a study of 817 elderly people, in which the prevalence of depression increased with deteriorating sleep quality [30]. It is well known that sleep quality ultimately influences sleep duration [31]. In the present study, 77.1% of participants in the poor sleep quality group and 55.3% of those in the good sleep quality group had an inappropriate sleep duration (<7 h or >8 h). This is consistent with the results of a 2012-2013 survey of 11,276 adults in Northeast China aged ≥35 years that demonstrated an increased prevalence of depression among those who had too short (≤6 h) or too long (≥9 h) a sleep duration [32]. Notably, women experience depression more frequently than men [19]. A four-year follow-up study of 93,676 postmenopausal women showed that depression increases the incidence of CVD even after adjusting for the general risk factors of CVD [33]. The results of these previous studies and those of the present study suggest that depression must not be neglected and must be appropriately managed to reduce the occurrence of somatic symptoms. According to the "Fasa PERSIAN Cohort Study" of 10,129 Iranian subjects aged 35-70 years, a shorter sleep duration (≤6 h) was associated with a 1.2-fold increased incidence of CVD, consistent with the Framingham risk score for short sleepers [34]. In this study, the majority (61%, 126 participants) of middle-aged women with CVD risk factors had a poor sleep duration (<7 or >8 h). This emphasizes the need for interventions to improve women's sleep quality. Depression has been reported to be positively correlated with sedentary time [17,29,35]. Our findings showed that sedentary time and depression are positively correlated, and that increased sedentary time contributes to the deterioration of sleep quality. This is consistent with the findings of the Sleep in America poll of 843 adults, in which sleep quality was found to decline with increasing screen time (i.e., television viewing and computer use during leisure time) [36]. An increase in sedentary time not only affects sleep quality but also increases the incidence of chronic disease and CVD [36-38]. Thus, aggressive interventions that promote physical activity in middle-aged adults are necessary to prevent and reduce CVD morbidity.
In addition, improving sedentary lifestyle habits and increasing physical activity are important for the enhancement of sleep quality, which has been found to reduce depression, somatic symptoms, and CVD risk factors in women [39-41]. However, women generally engage in less physical activity and demonstrate lower compliance compared to men [42]. In a study of 846 individuals with CVD risk factors and healthy individuals, the percentage of individuals who met the recommended 600-1500 MET of physical activity was 28% in the CVD risk group and 31% in the control group, indicating that people with CVD risk factors engage in less physical activity than healthy individuals [43]. Thus, physical activity must be promoted among women to prevent CVD. Although we did not directly measure the amount of physical activity among our participants, the poor sleep quality group had markedly higher sedentary time than the good sleep quality group, and increased sedentary time further elevates the risk for CVD in people with poor sleep quality. Thus, sex-specific intervention programs that facilitate physical activity are needed to enhance sleep quality and lower CVD morbidity among individuals with CVD risk factors. However, a prior study revealed that work-related sedentary time is not a consistent risk factor for CVD [44]. This disparity in the findings of previous studies indicates that further research is needed to clarify the association between sleep quality and the incidence of CVD according to sedentary time in women. This study has a few limitations. First, this study was focused on understanding the relationships among the variables, and the participants were selected using convenience sampling. Second, patients with sinus hypertrophy, obstructive sleep apnea, and narcolepsy, which directly influence sleep quality, could not be excluded in advance. Hence, the results have limited generalizability, and future multicenter studies are needed to corroborate the findings of this study. Finally, this study is a descriptive survey; therefore, causality among the variables could not be established. Thus, intervention-based longitudinal studies with long follow-up periods focused on the enhancement of sleep quality in women are needed in the future. Nevertheless, this study is significant in that it identified the predictors of sleep quality, a CVD risk factor, and shed light on the relationship between somatic symptoms, depression, and sleep quality in middle-aged women with CVD risk factors. Hence, the results provide valuable foundational data for the development of customized intervention programs and sex-specific strategic education programs aimed at reducing CVD risk factors in women.

Conclusions

This study was conducted to identify the predictors of sleep quality, a CVD risk factor, in middle-aged women with CVD risk factors. Somatic symptoms, depression symptoms, and a sedentary lifestyle were identified as predictors of sleep quality, and sleep quality deteriorated as each of these variables increased. The most potent predictor of sleep quality was somatic symptoms. These findings suggest that implementing sex-specific interventions and lifestyle modifications could be effective in reducing the development of certain CVD risk factors. Moreover, the findings provide useful data that can facilitate the planning of intervention strategies to ameliorate the deterioration of sleep quality resulting from hormonal changes in middle-aged women.
Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki and was approved by the Institutional Review Board (IRB) at Hanyang University (HYI-14-118-3). After completion of the survey, the collected data were analyzed after obtaining approval from the IRB at Kyongdong University (1041455-202012-HR-010-01, 30 December 2020).

Informed Consent Statement: Written informed consent was obtained from all subjects involved in the study.

Conflicts of Interest: The authors have no conflicts of interest to declare.
2021-10-17T15:11:23.561Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "11973c84ec8ed4aa04708c5a054c847b417a2609", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-9032/9/10/1378/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d397364665af4c01f1ab6acab6093f0aa4f2ee21", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
133109031
pes2o/s2orc
v3-fos-license
The dynamics of shoreline change analysis based on the integration of remote sensing and geographic information system (GIS) techniques in Pekalongan coastal area, Central Java, Indonesia

Coastal areas are found in the dynamic zone at the interface between the three major natural systems of the Earth's surface. The phenomenon of shoreline change is one of the most frequent problems encountered in the coastal environment and is caused by natural processes that result in dynamic changes in the coastal area. Coastal area change can affect the vulnerability of the coastal environment and its properties, such as shoreline stabilization, flood control, sediment retention, natural protection and others. The method of integrating remote sensing data with geographic information system (GIS) techniques has been widely used to monitor and analyze the dynamics of shoreline change in coastal areas. The purpose of this study is to map and analyze the dynamics of shoreline change from 1978 to 2017 in the study area. An approach combining a spectral value index and visual interpretation of Landsat images was used and proposed to indicate the separation of land and water bodies, for shoreline extraction. The normalized difference water index (NDWI) can be used as a spectral value index approach for differentiating land and water bodies. Furthermore, the analysis of shoreline changes was performed using the digital shoreline analysis system (DSAS). Based on calculations made using DSAS, it can be seen that the pattern of coastline change tends to be dominated by offshore erosion. The results of this study may also be important as input data for coastal hazard assessment as part of the effort to overcome the problem of flood tides.

Introduction

The phenomenon of shoreline change is one of the most frequent problems encountered in the coastal environment and is caused by natural processes that result in dynamic changes in the coastal area (Thomas et al., 2015; Burningham and French, 2017). Coastal area change can affect the vulnerability of the coastal environment and its properties, such as shoreline stabilization, flood control, sediment retention, natural protection and others (Bonetti et al., 2013; Brown et al., 2013). In addition, changes in the coastal environment can also be caused by human construction activities, including the development of infrastructure facilities. According to Bird and Ongkosongo (1980) and others, the development of seawalls and breakwaters, artificial coastal land reclamation and the removal of coastal materials have direct impacts on changes in the coastal environment. Remote sensing data archives integrated with GIS techniques can be used to track and historically map the dynamics of coastal shoreline change. Remote sensing data integrated with GIS techniques have been widely used to monitor and analyze the dynamics of shoreline changes in coastal areas, including Landsat MSS, TM and SPOT imagery (Li and Damen, 2010; Erener and Yakar, 2012), ASTER imagery (Addo et al., 2011; Allen, 2012), Ikonos imagery (Kaichang et al., 2004), Quickbird imagery (Xiaodong et al., 2006) and others. Pekalongan is located in the coastal area of Central Java, Indonesia. Studying the characteristics of dynamic shoreline change is important for the coastal areas of Pekalongan. The purpose of this study is to map and analyze the dynamics of shoreline change from 1978 to 2017 based on the integration of remote sensing and GIS techniques in the study area.
This study was conducted as part of the effort to overcome the problem of flood tides, focusing on the exploration and analysis of the dynamic characteristics of coastal shoreline change as one of the causes of tidal flooding in Pekalongan. In addition, the results of this study may also be important as input data for coastal hazard assessment.

Study area

The study area is located in Pekalongan, which is in one of the northern coastal areas of Central Java, Indonesia (Figure 1), at coordinates 6°51′00″ S-6°54′00″ S and 109°36′00″ E-109°43′00″ E. In general, the geological formations in the study area are alluvial deposits derived from rivers, swamps and beaches, with a thickness of up to 150 m, consisting of gravel, sand, silt and clay (Condon et al., 1996).

Data availability

In this study, remotely sensed data were used to perform shoreline mapping and analysis. Landsat data, with a resolution of 30 m at Level 1 Geometric (L1G), from the MSS sensor (path/row: 120/65) was used as the input data for 1978, while TM sensor data was used for 1988, 2007 and 2011. The ETM+ sensor was used as the input data for 2000 and the OLI/TIRS sensor was used as the input data for 2017. Landsat MSS and Landsat 5 TM data were provided by the US Geological Survey (USGS), and Landsat 7 ETM+ and Landsat 8 OLI/TIRS data were provided by the Remote Sensing Technology and Data Center, LAPAN.

Image and ancillary data pre-processing

Satellite image pre-processing was performed to convert the digital number (DN) values to reflectance values. The standard products Landsat MSS, 5 TM and 7 ETM+ provide data in 8-bit unsigned integer format, while Landsat 8 OLI/TIRS provides 16-bit unsigned integer data. Processing into standard reflectance is required to address these differences in value formats in the satellite images. The first stage of the conversion process converts DN values to radiance values using the formula presented in Equation 1. The second stage converts radiance values to reflectance values, as in Equation 2. This second stage of the process refers to Chavez (1988):

$L_\lambda = \frac{LMAX_\lambda - LMIN_\lambda}{QCAL_{max} - QCAL_{min}} (QCAL - QCAL_{min}) + LMIN_\lambda$ (1)

where $L_\lambda$ is the spectral radiance at the sensor's aperture; $QCAL$ is the quantized calibrated pixel value; $LMIN_\lambda$ is the spectral radiance scaled to $QCAL_{min}$; $LMAX_\lambda$ is the spectral radiance scaled to $QCAL_{max}$; $QCAL_{min}$ is the minimum quantized calibrated pixel value and $QCAL_{max}$ is the maximum quantized calibrated pixel value.

$\rho = \frac{\pi (L_{sat} - L_p) d^2}{ESUN_\lambda \cos\theta_s}$ (2)

where $\rho$ is the land-surface reflectance for Landsat images; $L_{sat}$ is the satellite radiance; $L_p$ is the path radiance; $d$ is the Earth-to-Sun distance in astronomical units; $ESUN_\lambda$ is the mean solar exo-atmospheric irradiance and $\theta_s$ is the solar zenith angle.

The dynamics of coastal shoreline mapping and change analysis

Shoreline mapping and change analysis are important in providing input data for coastal hazard assessment. Several studies have been conducted with various approaches for shoreline extraction using the integration of remote sensing data and GIS techniques, including visual interpretation, spectral value index, multi-source data analysis, automatic change detection, change vector analysis and others (Kalliola, 2004; Shalaby and Tateishi, 2007; Ekercin, 2007; Erkkilä and Marfai et al., 2007; Zhao et al., 2008; Kuleli et al., 2011). In this study, an approach combining a spectral value index and visual interpretation of Landsat images was used and proposed to indicate the separation of land and water bodies, for shoreline extraction. The normalized difference water index (NDWI) can be used as a spectral value index approach for differentiating land and water bodies, as used by Memon et al. (2015).
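As an illustration of this two-stage conversion, the sketch below implements Equations 1 and 2 in Python. It is a minimal sketch only: the function names are ours, and the calibration constants shown are placeholders, since the real values of the scaling radiances, ESUN, Earth-Sun distance and solar zenith angle must be taken from each scene's metadata file.

import math

def dn_to_radiance(qcal, lmin, lmax, qcal_min, qcal_max):
    # Equation 1: rescale a quantized DN value to at-sensor spectral radiance.
    return (lmax - lmin) / (qcal_max - qcal_min) * (qcal - qcal_min) + lmin

def radiance_to_reflectance(l_sat, l_path, esun, d_au, sun_zenith_deg):
    # Equation 2 (after Chavez, 1988): convert at-sensor radiance to
    # land-surface reflectance, subtracting an estimated path radiance.
    theta = math.radians(sun_zenith_deg)
    return (math.pi * (l_sat - l_path) * d_au ** 2) / (esun * math.cos(theta))

# Placeholder constants for illustration only; real values come from the
# scene metadata (LMIN/LMAX, ESUN, Earth-Sun distance, solar zenith angle).
radiance = dn_to_radiance(qcal=120, lmin=-1.52, lmax=193.0, qcal_min=1, qcal_max=255)
reflectance = radiance_to_reflectance(radiance, l_path=2.0, esun=1536.0,
                                      d_au=1.0, sun_zenith_deg=35.0)
print(round(radiance, 2), round(reflectance, 4))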
According to Gao (1996), the NDWI is one of the indicators sensitive to the change in water content and can be used to detect objects related to water bodies, as derived from the near-infrared (NIR) and short-wave infrared (SWIR) channels of remote sensing data. In this study, this form of the NDWI was derived and used for the Landsat 5 TM images for 1988, 2007 and 2011, for Landsat 7 ETM+ for 2000 and for the Landsat 8 OLI/TIRS images for 2017. NDWI can also be derived from the NIR and green channels of remote sensing data (McFeeters, 1996); in this study, this form of the NDWI was used for the Landsat MSS images for 1978. In detail, the equations for obtaining NDWI values from the remote sensing channels follow the previous studies conducted by Gao (1996) and McFeeters (1996), as presented in Equations 3 and 4:

$NDWI = \frac{NIR - SWIR}{NIR + SWIR}$ (3)

$NDWI = \frac{Green - NIR}{Green + NIR}$ (4)

where $NDWI$ is the normalized difference water index; $NIR$ is the near-infrared channel of the remote sensing data; $SWIR$ is the short-wave infrared channel of the remote sensing data; and $Green$ is the green channel of the remote sensing data.

Determination of the threshold value for the NDWI is required in this study to separate land and water bodies. According to McFeeters (1996), a threshold value needs to be applied to the NDWI so as to eliminate those land areas or non-water surfaces which have a low reflectance value. We used the isolation approach for the purest water pixels, accomplished by using one or more conditional statements that both threshold the NDWI (if needed) and eliminate pixels using the threshold value. The conditional test expression 'Con', with the threshold set for the value masking operation, is run using the raster calculator tool in the Math Toolset of Spatial Analyst ESRI software, as shown in Equation 5 (McFeeters, 2013). The results of NDWI processing can then be used as a reference for visual interpretation using on-screen digitization, to distinguish between land and water bodies. If the NDWI value is equal to or greater than the 'value masking operation', then the pixel is unchanged, but if it is not, then a value of -10 is assigned and it is carried forward to the next element of the expression. If the pixel is exactly equal to the 'zone maximum value of NDWI' found within the parcel, then a value of one is assigned to the output grid cell; if it is not, a value of zero is assigned (McFeeters, 2013):

$Output = Con\left(Con(NDWI \geq t,\ NDWI,\ -10) = NDWI_{zone\,max},\ 1,\ 0\right)$ (5)

where $t$ is the threshold used for the value masking operation and $NDWI_{zone\,max}$ is the zone maximum value of NDWI found within the parcel.

Furthermore, the analysis of shoreline changes was performed using the Digital Shoreline Analysis System (DSAS) ver. 4.4 software released in July 2017, as used previously by Carrasco et al. (2012), Rio et al. (2013), Hackney et al. (2013), Thébaudeau et al. (2013) and Oyedotun (2014). DSAS is a freely available software application that works within the Environmental Systems Research Institute (ESRI) Geographic Information System (ArcGIS) software. This ArcGIS plug-in extension was developed by Thieler et al. (2017) and can be used for calculating shoreline change. DSAS computes rate-of-change statistics for a time series of shoreline vector data and can be used for historical trend analysis.

Results

In this study, coastal shoreline extraction and mapping were performed based on a combination of NDWI value calculations and visual interpretations of Landsat images. The high NDWI values indicate the high sensitivity of the Landsat image channel for identifying water bodies, as a result of which the boundaries between land and water bodies can be distinguished. Figure 2 shows the NDWI values obtained from Landsat images for 1978 to 2017, used as an indication of the separation of land and water bodies and used for shoreline and river extraction.
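A minimal NumPy sketch of this indexing and masking step is given below. The band arrays and the variable names are illustrative only, and the zone maximum is computed over the whole array for brevity, whereas the ArcGIS workflow described above evaluates it per parcel.

import numpy as np

def ndwi_gao(nir, swir):
    # Equation 3 (Gao, 1996): NDWI from the NIR and SWIR channels.
    return (nir - swir) / (nir + swir)

def ndwi_mcfeeters(green, nir):
    # Equation 4 (McFeeters, 1996): NDWI from the green and NIR channels.
    return (green - nir) / (green + nir)

def water_mask(ndwi, threshold):
    # Equation 5 logic: pixels below the threshold become -10 and are dropped;
    # pixels equal to the zone maximum NDWI become 1, all others 0.
    screened = np.where(ndwi >= threshold, ndwi, -10.0)
    return np.where(screened == screened.max(), 1, 0)

green = np.array([[0.10, 0.08], [0.30, 0.32]])
nir = np.array([[0.40, 0.38], [0.05, 0.04]])
mask = water_mask(ndwi_mcfeeters(green, nir), threshold=0.34)  # 1978 threshold, per Table 1
print(mask)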
Table 1 shows the NDWI value statistics used for the masking operation for land and water-body extraction from the Landsat images for 1978 to 2017. Furthermore, the separation of land and water bodies and their multitemporal change analysis are presented in Figures 3, 4 and 5. From Table 1 it can be seen that the available Landsat images have variations in statistical value which affect the threshold masking operation values. The use of high-resolution imagery (e.g., Google Earth, SPOT 6/7 images) is required as a reference to ensure the accuracy of the boundaries between land and water. The threshold masking operation used for the Landsat images has a different value for each year, as presented in Table 1. This reflects the different statistical parameters in each dataset. For example, the Landsat image for 1978 has a threshold masking operation value of 0.34. This means that, based on Equation 5, if the NDWI value is equal to or greater than 0.34, then the pixel is unchanged, but if it is not, a value of -10 is assigned to it and it is carried forward to the next element of the expression. If the pixel is exactly equal to the maximum value of the NDWI zone found within the parcel, then a value of one is assigned to the output grid cell; if it is not, a value of zero is assigned.

Table 2 shows the change area and average change results based on the multi-temporal analysis of land and water separation for 1978 to 2017. It can be seen that during the period 1978 to 2017 there was an increase in land area of 106.11 ha, with the average change being 2.72 ha/year. Meanwhile, there was a decrease in land area of 543.94 ha, with the average change being 13.95 ha/year. The changes in the shorelines and the delta have been caused not only by natural factors but also by factors such as the construction of docks and jetties around the mouth of the river (as presented in Figures 3, 4 and 5).

In this study, the segments and transects used for DSAS are presented in Figure 6, and the rate-of-change statistics provided by DSAS and used in this study are presented in Table 3. The transect spacing and transect length in this study are 300 m, with the cast direction auto-detected and a default uncertainty level of 6 m. The DSAS analysis produced 49 transect IDs, 44 of which are offshore and five of which are onshore. As shown in Table 3, six parameters are used in the DSAS analysis: end point rate (EPR); least median of squares rate (LMS); linear regression rate (LRR); R-squared of linear regression (LR2); standard error of linear regression (LSE); and confidence interval of linear regression (for 90%) (LCI90). The DSAS results for the example transects transect_ID = 11 (offshore position, or erosion) and transect_ID = 25 (onshore position, or sedimentation) are presented in Figure 7. In the examples illustrated in Figure 7 and Table 3, it can be interpreted that transect_ID = 11 has an EPR of -2.81 m/year during the period 1978 to 2017 and an LMS rate rounded to -2.76 m/year. The linear regression equation for transect_ID = 11 (y = -2.8835x + 5661.7) was determined by plotting the shoreline positions with respect to time (in years), with the slope of the equation describing the line; from this, the LRR is -2.88 m/year and the LCI90 is 0.86. The band of confidence around the reported rate of change is -2.88 ± 0.86. In other words, there is 90% confidence that the true rate of change is between -3.74 m/year and -2.02 m/year, with an LR2 of 0.95 and an LSE of 8.15 m.
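The sketch below reproduces the two simplest of these rate statistics, the EPR and the LRR with its 90% confidence band, for one transect. The shoreline positions are hypothetical values invented for illustration; DSAS computes these statistics internally, so the snippet only mirrors the definitions given above.

import numpy as np
from scipy import stats

# Hypothetical shoreline positions (m from the baseline) along one transect.
years = np.array([1978, 1988, 2000, 2007, 2011, 2017], dtype=float)
positions = np.array([120.0, 95.0, 60.0, 38.0, 25.0, 8.0])

# End Point Rate: net shoreline movement divided by the elapsed time.
epr = (positions[-1] - positions[0]) / (years[-1] - years[0])

# Linear Regression Rate and its 90% confidence interval (LCI90).
res = stats.linregress(years, positions)
lci90 = stats.t.ppf(0.95, df=len(years) - 2) * res.stderr
print(f"EPR = {epr:.2f} m/yr; LRR = {res.slope:.2f} +/- {lci90:.2f} m/yr "
      f"(90% CI); R2 = {res.rvalue ** 2:.2f}")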
Meanwhile, it can be interpreted that transect_ID = 25 has an EPR of 0.46 m/year during the period 1978 to 2017 and an LMS rate rounded to 0.26 m/year. The linear regression equation for transect_ID = 25 (y = 0.5419x - 1050.9) was determined by plotting the shoreline positions with respect to time (in years), with the slope of the equation describing the line; from this, the LRR is 0.54 m/year and the LCI90 is 0.80. The band of confidence around the reported rate of change is 0.54 ± 0.80. In other words, there is 90% confidence that the true rate of change is between -0.26 m/year and 1.34 m/year, with an LR2 of 0.46 and an LSE of 7.57 m.

Discussion

The coastal environment is an area related to both land and sea. Its change dynamics are caused by several physical processes such as tidal inundation, sea level rise, land subsidence, and erosion-sedimentation. These processes have an important role to play in the development of landscape and shoreline changes. Urban development in coastal areas, such as the building of sea walls and breakwaters, land reclamation, and the removal of beach material from the coastline, can also cause problems of environmental degradation and increase the vulnerability of coastal areas (Ongkosongo, 1980; Mills et al., 2005). The phenomenon of shoreline change is one of the most frequent problems encountered in the coastal environment and is caused by natural processes that result in dynamic changes in the coastal area. Coastal area change can affect the vulnerability of the coastal environment and its properties, such as shoreline stabilization, flood control, sediment retention, natural protection and others. The method of integrating remote sensing data and GIS used in this study has been widely applied, with various approaches, to monitor and analyze the dynamics of shoreline change by researchers such as Ozdarici and Turker (2006), Chalabi et al. (2006), Xiaodong et al. (2006) and others. The analysis of shoreline changes was performed using the DSAS, and it can be seen that the pattern of coastline change tends to be dominated by offshore erosion. Coastline changes occurring both naturally and as a result of development activities from 1978 to 2017 have contributed to the causes of tidal flooding in the study area. This is also shown in Figures 8 and 9, which relate to current conditions in the field and the changes occurring at this time. Some efforts can be made to prevent the worsening of coastline changes that have an impact on tidal flooding, namely: (a) recovery of the natural conditions of the coastal region through the reconstruction and revitalization of natural "barriers" or mangrove planting, which serve to inhibit, reduce or absorb water due to flood tides; (b) the release of coastal land or estuaries as conservation areas, and the rearrangement of ponds around the river/coastal estuaries; and (c) the strengthening of the wave-retaining sandbag embankments that already exist along the coast.

Conclusion

In this study, remote sensing data integrated with GIS techniques have been successfully used to track and historically map the dynamics of shoreline change. An approach combining the NDWI spectral value index and visual interpretation of Landsat images was used and proposed to indicate the separation of land and water bodies, for shoreline extraction. The analysis of shoreline changes was performed using the DSAS, and it can be seen that the pattern of coastline change tends to be dominated by offshore erosion.
The results of this study may also be important as input data for coastal hazard assessment as part of the effort to overcome the problem of flood tides, and also as a consideration in studying the vulnerability of the coastal environment. In addition, the results of this study are supported by spatial data information at a mapping scale of 1:25,000. Future research could be carried out using information from other remote sensing data (SPOT 6/7, Pléiades, Ikonos, QuickBird and others) to support spatial data information at mapping scales of 1:5,000 to 1:10,000.
2019-04-26T13:49:44.990Z
2019-04-01T00:00:00.000
{ "year": 2019, "sha1": "f453c3b0c5e07eb2d0767a84ab169203f4e9784b", "oa_license": "CCBYNC", "oa_url": "https://jdmlm.ub.ac.id/index.php/jdmlm/article/download/540/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "bfbba6d0a52cd600716e29163d2345661bf47514", "s2fieldsofstudy": [ "Geography", "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
73576565
pes2o/s2orc
v3-fos-license
Studies on the Seasonal Fluctuations in the Proximate Body Composition of Paratelphusa masoniana (Henderson) (Female), a Local Freshwater Crab of Jammu Region

The freshwater crab, Paratelphusa masoniana, was collected for a period of one year to investigate the seasonal fluctuation in its proximate composition. Marked seasonal variations in protein, lipid and moisture were observed over a period of one year to determine their variability in the course of the reproductive cycle. Both protein and lipid content are inversely related to moisture content. Maximum protein (62.15±0.30%; 55.85±0.48%) and lipid (5.85±0.46%; 5.49±0.38%) were observed during the non-spawning period and minimum values during the spawning months. The relationship between protein and lipid is, however, a direct one. On the basis of this investigation, it has been recorded that the local freshwater crab, P. masoniana, is a biannual breeder.

INTRODUCTION

Crabs, among many other invertebrates, are considered to be important shell fishery products (Gokoglu and Yerlikaya, 2003). Crab meat is considered a delicacy in many parts of the world, but the limited utilization of shellfishes (crabs) in our state (J&K) is due to conservative food habits and a lack of knowledge about the nutritive value of crabs. P. masoniana, a local freshwater crab of the Jammu region, has so far remained a virtually unstudied species as far as scientific knowledge is concerned. Nevertheless, before any species is subjected to commercial exploitation, it is an essential requisite that knowledge about its life cycle, particularly maturation and reproduction (Mourente and Rodriguez, 1991), besides its biochemical nature, is thoroughly studied and understood. Further, the scope or role of a species in aquaculture practice is primarily determined by its nutritional status, especially its protein value. Keeping these facts in view, the present study was designed to generate data regarding its nutritional status and the variation in its proximate composition, which will help to establish the present species as a potential culturable candidate. Compared to marine decapod crustaceans, little attention has been paid to the biochemical changes in relation to the reproductive cycle in decapod crustaceans in India, especially in freshwater crabs. The aim of this study was thus to generate data on the biochemical changes in the muscle of the crab during the course of the reproductive cycle.

MATERIALS AND METHODS

Collection: Crabs, after being collected from their natural habitat at Gho-Manhasan stream, at a distance of about 12 km from the University of Jammu (32.67° N; 74.79° E), were brought to the Wet Laboratory, Department of Zoology, University of Jammu. During the present course of study, only adult female crabs (of carapace width 5-6 cm) were selected, and the juveniles were released back into their natural habitat. Soon after being brought to the laboratory, the crabs were dissected for body meat.

Proximate body composition: The specimens collected from Gho-Manhasan stream on a monthly basis were subjected to analysis of proximate body composition. The analysis was performed for a period of one year (July 2010-June 2011). The organic body constituents of each component were determined by standard methods, namely total proteins (Lowry et al., 1951); lipid (Folch et al., 1957); and moisture and ash (standard method of AOAC, 1980). The results were expressed on a dry weight basis.
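As a rough illustration of the summary statistics and Pearson correlations reported in the following sections, the Python sketch below computes annual means and pairwise correlation coefficients from monthly series. The monthly values are synthetic stand-ins, not the measured data, and the helper function is ours.

import numpy as np

rng = np.random.default_rng(0)
# Synthetic monthly values (%) standing in for 12 months of measurements.
protein = rng.normal(54.4, 4.5, 12)
lipid = rng.normal(4.8, 0.6, 12)
moisture = rng.normal(81.0, 1.8, 12)

def pearson_r(x, y):
    # Pearson's correlation coefficient for a pair of monthly series.
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum())

print("annual mean +/- SD, protein:", round(protein.mean(), 2), round(protein.std(ddof=1), 2))
print("protein vs moisture r =", round(pearson_r(protein, moisture), 3))
print("lipid vs moisture   r =", round(pearson_r(lipid, moisture), 3))
print("protein vs lipid    r =", round(pearson_r(protein, lipid), 3))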
Statistical analysis: The data were analyzed on a personal computer; correlations were calculated by Pearson's correlation method, ANOVA was used to test the level of significance with the help of Microsoft Excel 2003 and SPSS (version 12.0, Chicago, USA), and means were compared using Duncan's multiple range test, taking p<0.05 as the level of significance (Duncan, 1955).

RESULTS

The proximate body composition of the body meat of female P. masoniana was determined over a period of one year and the results so obtained are depicted in Table 1 and Fig. 1. During the present course of study, it was revealed that the highest values of protein were obtained in the month of March (62.16±0.30%) whereas the lowest values were found in the month of December (46.05±0.82%). The values so recorded were found to differ significantly among the various months (p<0.05). The average value of the protein content in the body meat of P. masoniana (female) throughout the year was found to be 54.38±0.43%. The perusal of Table 1 also reveals an increasing trend of protein during the months of March and October, showing two peaks respectively.

The lipid content in the body meat of the female crab was observed to have a maximum value in the month of September (5.85±0.46%), which was not found to differ significantly (p>0.05) from the value noted during the month of March (5.49±0.38%), and minimum values (3.99±0.32%; 4.08±0.40%) in the months of July and December. The average value of the lipid content recorded throughout the year was 4.83±0.61%.

The moisture content in the body meat of P. masoniana (female) ranged from 78.13±1.45% (minimum) to 84.23±1.60% (maximum), with an annual average of 80.98±1.78%. Maximum values of moisture content (84.23±1.60% and 82.56±1.40%) were observed in the months of July and December, respectively.

The values noted for protein, lipids, moisture and ash were found to have significant differences among the various months of the year, showing gradual increasing and decreasing trends. Correlating the various parameters among themselves revealed that protein and moisture showed a negative correlation (r = -0.84311), lipid and moisture also showed a negative correlation (r = -0.49769), and protein and lipid showed a positive correlation (r = 0.549235).

DISCUSSION

In the present study, it has been observed that the protein content in the female crab showed marked seasonal fluctuations in the body meat. The pronounced fall in the protein content in females suggests that it may be mobilised for gonadal development. The same trend was observed by Sriraman (1978) in the shrimp Penaeus merguiensis and in the freshwater prawn M. idae. The protein content remains characteristically low during winter (December and January) and the monsoon (July and August), which coincides with the spawning seasons (i.e., May to July and December to February) when the gonads are in an advanced stage of maturity. There is, however, an increase in the protein content during spring (February to April) and the post-monsoon period (August to October). The high protein content observed during spring and early winter appears to be due to active feeding, an optimum temperature regime and the optimum availability of food, as algal blooms and plankton acquire their maxima during this period. Our observations are also strengthened by the previous recordings made by Parveez (2005) from the same stream, which happens to be the natural habitat of P.
masoniana. He observed that benthic species like Chironomus sp. larvae and Tipula sp. were abundant during summer and the monsoon (Nelofer, 2003; Sawhney, 2004). In crustaceans, a great amount of energy gets channeled to the gonads during reproduction, which is reflected in the deposition or depletion of nutrients with the advent or departure of the reproductive period (Lambert and Dehnel, 1974). Samyal (2007), while investigating Macrobrachium dayanum, recorded two peaks in the muscles in the months of May and November, when the ovaries are in an early stage of development with stage I and stage II oocytes. Bakhtiyar (2008) also recorded a fall in the muscle protein in M. dayanum which coincides with the spawning season, when the gonads were in an advanced stage of maturity.

In the present study, two peaks in the muscle lipid content were observed in the months of March (5.49±0.38%) and September (5.85±0.46%). Thus, high lipid content was observed in spring and the post-monsoon period, and this could be due to active feeding and the optimum availability of food, as algal blooms and plankton are also reported to acquire their maxima during this period (Sharma, 2005). There was also a decline in the lipid content during the spawning period, and this is possibly due to the mobilization of lipid as an energy source to meet the high energy demands during the act of ovulation and spawning on the one hand, and due to low feeding intensity and the low availability of food items on the other. The reduction in the amount of lipid content in the muscles for the development and maturation of the gonads has been well discussed by Idler and Bitner (1960), Jafri (1968), Diana and Mackay (1979), John and Hameed (1995), Raina (1999), Jonsson and Jonsson (2005), Langer et al. (2008) and Samyal et al. (2011).

There is significant variation in the moisture content of P. masoniana (female) throughout the period of investigation. The spawning of P. masoniana usually occurs during winter (December and January) and the monsoon (July and August), since they are biannual breeders. It has also been recorded that during these months the moisture content is highest in the muscles. Samyal et al. (2011), while investigating M. dayanum, recorded that it is also a biannual breeder and that during these months, i.e., May-June and Sep-Oct, the moisture content is highest in the muscles. Bakhtiyar (2008) also observed a similar trend in M. dayanum and Labeo rohita, where the variations in the moisture content were related to spawning. Moisture content thus showed significant but inverse relationships with the lipid and protein contents. This inverse relation might be due to low temperature, a low feeding rate and high energy demands to maintain body temperature and to cope with food scarcity during winter. Similar results were earlier propounded by many authors (Brandes and Dietrich, 1953; Gerking, 1955; Brown, 1957; Suppes et al., 1967; Jonsson and Jonsson, 2005; Nargis, 2006; Langer et al., 2008; Samyal et al., 2011).

The ash or total inorganic content in the present study varies on average from 6.54±0.35% to 12.35±0.72%. Two peaks (12.35±0.46% and 9.89±0.73%) were observed in the body meat in the months of March and September. The increase in the ash content in the female crabs corresponds to the post-spawning period. Love (1970) also witnessed an increase in the ash level during post-spawning. Jafri (1968) stated that there is no direct relationship between the ash cycle and the feeding or spawning activities of Cirrhina mrigala. Bakhtiyar (2008), while investigating M.
dayanum, recorded the minima in the months of October and November and the maxima in the month of March. Such variability in ash content could be attributed to the utilization of mineral matter for the growth and maturation of the ovaries and for spawning purposes.

Fig. 1: Seasonal variation in the proximate composition of body meat of female Paratelphusa masoniana

Table 1: Seasonal variation in the proximate composition of Paratelphusa masoniana (female), a local freshwater crab of Jammu region, during different months of the year (July 2010-June 2011). Data presented in the table are the means of three readings, i.e., Mean±S.D. (the annual average is the mean of 12 readings, i.e., Mean±S.D.); values having the same superscript in a column do not differ significantly.
2018-12-20T17:59:28.016Z
2013-08-05T00:00:00.000
{ "year": 2013, "sha1": "3d58b737761f719a41af96b419db443629c1e7a1", "oa_license": "CCBY", "oa_url": "https://www.maxwellsci.com/announce/AJFST/5-986-990.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "ae9a62b369af9246e5e5f635fe9de6d5544f59d8", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
241601171
pes2o/s2orc
v3-fos-license
Peptidylarginine Deiminase 2 in Host Immunity: Current Insights and Perspectives

Peptidylarginine deiminases (PADs) are a group of enzymes that catalyze post-translational modifications of proteins by converting arginine residues into citrullines. Among the five members of the PAD family, PAD2 and PAD4 are the most frequently studied because of their abundant expression in immune cells. An increasing number of studies have identified PAD2 as an essential factor in the pathogenesis of many diseases. The successes of preclinical research targeting PAD2 highlight the therapeutic potential of PAD2 inhibition, particularly in sepsis and autoimmune diseases. However, the underlying mechanisms by which PAD2 mediates host immunity remain largely unknown. In this review, we will discuss the role of PAD2 in different types of cell death signaling pathways and the related immune disorders, contrasted with the functions of PAD4, providing novel therapeutic strategies for PAD2-associated pathology.

HIGHLIGHTS

• Peptidylarginine deiminase (PAD) enzymes catalyze the conversion of arginine residues to citrulline, regulating the activity of host immunity.

• PAD2 plays an important yet different role in immune cells than its isozyme PAD4. Although PAD4 was previously identified as the key regulator in the formation of neutrophil extracellular traps (NETosis), PAD2 also takes part in NETosis in the absence of PAD4.

• Pad2 deficiency decreases macrophage pyroptosis while Pad4 deficiency increases pyroptosis.

• PAD2, differing from the other PAD family members, citrullinates arginine 1810 (Cit1810) in repeat 31 of the carboxyl-terminal domain of the largest subunit of RNA polymerase II, which enables the efficient transcription of highly expressed genes needed for cell cycle progression, metabolism, and cell proliferation.

ETosis is a form of cell death that results in the release of a complex lattice of chromatin containing DNA, histones, and other associated proteins (16)(17)(18). These extracellular chromatin webs can entrap and kill microbial organisms. Originally, this phenomenon was described in neutrophils and termed NETosis (Neutrophil Extracellular Traps). However, researchers later found that this mechanism also exists in other cell types such as macrophages, eosinophils, and mast cells (19). Thus, some researchers recommend that this mechanism of cell death be generalized as "ETosis" (20)(21)(22), while others prefer using NETosis or macrophage ETosis (METosis) for the death of specific cell sources.

Ca1 and Ca6 are occupied by calcium in both inactivated and activated PAD2. During the activation of PAD2, calcium ions bind to sites Ca3-5. Afterwards, calcium binds to Ca2, which causes conformational changes at the active site. R347 moves out of the active site, and C647 moves in. As such, a pocket-like structure is generated for substrate binding (Figure 1B). Then, where does the calcium come from for PAD2 activation? A previous study revealed that adenosine triphosphate (ATP)-induced PAD2 activation can be dramatically diminished in mast cells cultured in calcium-free media, suggesting that the calcium needed for PAD2 activation mainly comes from the extracellular space (25). Zheng et al. also demonstrated that Annexin A5 (ANXA5) can bind to the plasma membrane to facilitate calcium influx and further contribute to PAD2 activation (26). Thus, sufficient extracellular calcium is required for the activation of PAD2.
Citrullination can change the net charge and increase the hydrophobicity of proteins, which subsequently alters the structures and functions of the proteins (35)(36)(37)(38)(39). The effects of citrullination are variable and debated. Hojo-Nakashima et al. revealed that PAD2 is beneficial as it catalyzes vimentin citrullination in THP-1 cells (a human monocytic cell line) to promote the differentiation and maturation of macrophages (40). By contrast, vimentin citrullinated by PAD2 is identified as an autoantigen in rheumatoid arthritis (RA), exhibiting the potentially detrimental role of PAD2 (32). Apart from vimentin, a large number of proteins are found to trigger autoimmune responses following PAD2-mediated citrullination (32). Interestingly, dysregulation of PAD2 activity has been implicated in many diseases such as RA (41), multiple sclerosis (MS) (42), and neurodegenerative disorders (43). Moreover, previous studies revealed that PAD2-catalyzed citrullination is an essential process during various modes of immune cell death, such as ETosis (13,14) and pyroptosis (15). These modalities of immune cell death may play a major role in the pathogenesis of sepsis and other inflammatory diseases (44,45). Consequently, it is critical to understand the role of PAD2 in host immunity and related diseases. In the following sections, the mechanisms via which PAD2 mediates cellular processes, regulates immune responses, and causes diseases will be reviewed and discussed. Understanding the mechanisms of host immunity regulated by PAD2 may ultimately allow for the design of novel therapeutic strategies for a multitude of immune disorders.

PAD2 in Macrophages

Macrophages are immune cells which exhibit relatively rich PAD2 expression (9). Macrophages play an important role in both innate and adaptive immunity. Phagocytosis and pyroptosis are two major pathways involved in pathogen clearance by innate immunity (46)(47)(48)(49)(50). Macrophages contribute to adaptive immunity by presenting the antigens of pathogens to T cells (51)(52)(53). PAD2 can affect these immune actions through regulating the differentiation of macrophages (9,40). Macrophages are derived from monocytes in the circulation. Interestingly, although PAD2 mRNA can be detected in monocytes, it is not translated into PAD2 protein until the initiation of differentiation (9). Moreover, a previous study revealed that the levels of PAD2 mRNA and protein exhibit concomitant increases in THP-1 cells during their differentiation into macrophages (40). Nonetheless, the underlying mechanisms through which PAD2 mediates monocyte differentiation remain elusive.

PAD2 also mediates the activation of pyroptosis (Figure 2), another important signaling pathway associated with anti-pathogen activities in macrophages (15). Pyroptosis is an inflammatory form of macrophage death induced by infection or chemical stimulation and mediated by Caspase-1 and/or Caspase-11 (54). Prior to the activation of Caspase-1, the stimulating signals are sensed by pattern recognition receptors, including NOD-like receptors and AIM2-like receptors, and initiate the assembly of inflammasomes (55)(56)(57). During the formation of inflammasomes, a quick increase in protein citrullination can be observed in macrophages (15). Specifically, ASC (apoptosis-associated speck-like protein containing a CARD), a critical component of inflammasomes, is also citrullinated. After PAD2 and PAD4 are dually suppressed by Cl-amidine, a pan-PAD inhibitor (58), the citrullination of ASC is reduced (15).
Additionally, the activation of NLRP3 inflammasomes is also dampened, which subsequently diminishes macrophage pyroptosis. In agreement with these findings, our most recent experiments revealed that the knockout of Pad2 in macrophages can decrease Caspase-1-mediated pyroptosis induced by Pseudomonas aeruginosa sepsis (PA-sepsis) (59). In contrast, Pad4 depletion in macrophages can increase Caspase-1-mediated pyroptosis in the mouse model of PA-sepsis (59). Therefore, PAD2-mediated ASC citrullination is probably a significant step during inflammasome assembly, which then regulates the activation of Caspase-1 and pyroptosis. Nonetheless, since little effort has been taken to explore the association between PAD2 and pyroptosis, the underlying mechanisms via which PAD2 affects Caspase-1 activation remain to be elucidated.

Aside from pyroptosis, macrophages are also reported to undergo another form of cell death termed METosis (Figure 2) (13), which describes the release of extracellular trap-like structures from macrophages (20,60). Similar to NETs, macrophage ETs (METs) are found in response to various microorganisms (61). METs are capable of trapping and immobilizing microbes to assist in microbial clearance (20). Several studies demonstrated that histone hypercitrullination catalyzed by PADs is an essential step during METosis (13,62). Due to the alterations in net charges and structures, hypercitrullinated histones render chromatin more susceptible to decondensation (63). Most prior studies conclude that the process of citrullination is driven by PAD4, but a study by Mohanan et al. identified PAD2 as a major mediator in tumor necrosis factor (TNF)-α-induced MET release from Raw264.7 macrophages (13). Therefore, further work is needed to clarify the association between PAD2 and METosis.

FIGURE 2 | The role of PAD2 in METosis and pyroptosis in macrophages. Pathogens trigger calcium influx into the cytoplasm of macrophages. Subsequently, PAD2 is activated due to elevated levels of calcium. Activated PAD2 translocates into the nucleus to induce histone citrullination and chromatin decondensation, leading to METosis. Also, PAD2 mediates pyroptosis via citrullinating ASC. Citrullinated ASC participates in the assembly of inflammasomes, which activate Caspase-1. Caspase-1 facilitates the maturation of IL-1β and IL-18 via cleaving their precursors. Meanwhile, Caspase-1 cleaves and activates PFMs, which insert into the plasma membrane to create pores allowing massive water influx. As a result, macrophages swell and rupture to accomplish pyroptosis, releasing mature IL-1β and IL-18. ASC, apoptosis-associated speck-like protein containing a CARD domain; IL, interleukin; METosis, macrophage death with release of macrophage extracellular traps; PAD2, type 2 peptidylarginine deiminase; PFMs, pore forming molecules; PRR, pattern recognition receptor.

PAD2 in Neutrophils

Overall, PAD2 seems to have minimal effects on neutrophils due to its low expression. The distributions of PAD2 and PAD4 differ in neutrophils. Unlike in macrophages, the PAD that is predominantly expressed in neutrophils is PAD4 (64)(65)(66). PAD4 exists in granules, the plasma membrane, and the nucleus, while PAD2 is mainly detected in granules (64). Like macrophages, neutrophils can form NETs to defend against microbial infection (65,(67)(68)(69). NETosis also requires PAD-catalyzed histone hypercitrullination, which induces chromatin decondensation (63).
In contrast to METosis, the citrullinating process in neutrophils is believed to be entirely mediated by PAD4 (65). However, our recent study found that selective inhibition of PAD2 can significantly decrease the generation of citrullinated histone H3 (CitH3) in lipopolysaccharide (LPS)-stimulated neutrophils (70). This result suggests that PAD2 may also play a role in citrullinating actions within neutrophils. Furthermore, PAD2 released extracellularly from neutrophils may still be able to citrullinate histone H3 and fibrinogen (64).

PAD2 in T Cells

There are two major subtypes of T cells, which are CD4+ T cells and CD8+ T cells (71). CD8+ T cells directly kill microbe-infected cells or tumor cells (72), while CD4+ T cells usually act indirectly to regulate the immune response, and are thus coined "helper T cells" (Th) (73). The relationship between CD8+ T cells and PAD2 is not well studied, while several studies have revealed that PAD2 can modulate the polarization and functions of CD4+ T cells (11,74,75). The expression of PAD2 in naïve CD4+ T cells is much lower than that in memory CD4+ T cells, indicating that PAD2 may have effects on the differentiation of CD4+ T cells (74). In fact, the fate of differentiating CD4+ T cells is determined by two key transcription factors, GATA3 and RORγt (76). PAD2 can directly citrullinate these two transcription factors, which changes their DNA binding ability to modulate gene expression (11). PAD2 inhibition decreases the differentiation of Th17 cells but promotes the differentiation of Th2 cells from naïve CD4+ T cells (11). Conversely, PAD2 overexpression in human peripheral blood mononuclear cells reduces Th2 cell polarization and increases Th17 cell polarization (77). Meanwhile, PAD2 regulates the functions of CD4+ T cells (11). PAD2 deficiency enhances cytokine production in Th2 cells but suppresses cytokine generation in Th17 cells. Interestingly, although PAD2 is not associated with Th1 polarization, PAD2 inhibition can impair interferon-γ production in Th1 cells (11). In addition to directly altering the functions and polarization of CD4+ T cells, PAD2 can affect T cell activities by citrullinating certain chemotaxins (i.e., CXCL10 and CXCL11) that mediate the chemotaxis of T cells (34). T cells exhibit lower sensitivity to citrullinated CXCL10 and CXCL11. Therefore, fewer T cells will be attracted to inflammation sites, resulting in an attenuated inflammatory response.

PAD2 in B Cells

B cells are a subset of immune cells which are responsible for antibody production and antigen presentation (78). The expression of PAD2 is low in B cells (79). Nonetheless, PAD2 is probably required for the transition of B cells to plasma cells, as the knockout of Pad2 can cause a significant reduction in bone marrow plasma cells in a mouse model of TNF-α-induced arthritis (80). Consequently, IgG produced by plasma cells is also decreased in Pad2−/− mice, which is associated with alleviated severity of TNF-α-induced arthritis (80). This may indicate that PAD2 is required for the development of plasma cells. However, given that PAD2-citrullinated proteins are antigens for B cells, another explanation may also hold: Pad2 knockout reduces the generation of citrullinated proteins, thus resulting in decreased activation of B cells. Hence, further work is needed to clarify the role of PAD2 in B cells.

PAD2 in Other Immune Cells

PAD2 can also interact with other cells to modulate the immune response.
For example, ATP upregulates the expression of Adamts-9, Rab6b, and TNFRII through activation of PAD2 in mast cells, contributing to the pathogenesis of RA (25). PAD2 and PAD4 inhibition by Cl-amidine also hampers the functional maturation of dendritic cells induced by toll-like receptor agonists (81). As evidenced, there remains a paucity of studies exploring the interplay between PAD2 and immune cells. Further clarifying the mechanisms by which PAD2-mediated citrullination participates in immune activities can continue to advance the field toward future clinical applications of PAD2-guided therapies.

PAD2 IN HOST IMMUNITY

The immunomodulatory effects of PAD2 are mostly exerted by citrullinating key proteins involved in cell signaling pathways. Thus, PAD2 may display different impacts on host immunity under different circumstances, which is determined by the roles of the citrullinated proteins in these pathways. The involvement of PAD2 in autoimmune diseases reflects its proinflammatory activity. The pathogenesis of RA is associated with elevated levels of PAD2-citrullinated proteins in synovial fluid (82). B cells can recognize the citrullinated epitopes and generate autoantibodies against the citrullinated proteins (83)(84)(85). In 70% of patients with RA, elevated levels of anti-citrullinated protein antibodies (ACPA) can be detected (86). After treatment with antirheumatic drugs, ACPA levels in the circulation are significantly reduced, correlated with decreased severity of RA (87,88). These results suggest that protein citrullination by PAD2 can trigger an intensified inflammatory response in RA patients. Of note, RA patients who develop antibodies against PAD2 tend to suffer from less severe damage in the joints and other organs (89).

However, PAD2 sometimes exhibits the ability to inhibit the inflammatory response. For example, Loos et al. reported that PAD2-mediated citrullination of CXCL10 and CXCL11 can reduce their chemotactic ability and thus result in diminished accumulation of inflammatory cells (34). PAD2 can also citrullinate certain transcription factors to mediate the differentiation of immune cells. The knockout of the Pad2 gene in mice can cause a shift in the maturation of Th cells, which increases the differentiation of Th2 cells but decreases the differentiation of Th17 cells, rendering the mice susceptible to allergic airway inflammation (11).

The close association between PAD2 and host immunity is partly due to its relatively abundant expression in immune cells (74,79). PAD2 functions as an important factor not only in the differentiation of immune cells, but also in several cell death signaling pathways (13,15,90,91). Although PAD4 is identified as the key regulator in NETosis (63,92,93), PAD2 may also play a part in the process, as NETosis can still occur in the absence of PAD4 (94). Another type of cell death, pyroptosis, which mostly takes place in macrophages, is found to be regulated by PAD2 and PAD4 (15). Additionally, overexpression of PAD2 in Jurkat cells, which are derived from human T lymphocyte cells, can trigger enhanced apoptosis (91). Collectively, these findings indicate that PAD2 has an intimate relationship with immune cells and host immunity.

Sepsis

Sepsis is characterized by a dysregulated inflammatory response that may result in multi-organ failure (95). The role of PADs in sepsis has been identified in some previous studies (70,(96)(97)(98). However, most of them explored the association between PAD4 and sepsis.
This was probably due to the critical effects of PAD4 on NETosis, which is believed to be an important signaling pathway involved in the pathogenesis of sepsis (99). The application of pan-PAD inhibitors, which inhibit the activity of both PAD2 and PAD4, can remarkably improve survival in mouse models of LPS-induced endotoxemia and cecal ligation and puncture (CLP)-induced sepsis (100)(101)(102). Nonetheless, when Pad4−/− mice were used to explore the individual effects of PAD4 on sepsis, researchers found that Pad4 deficiency did not improve survival nor ameliorate bacteremia (94,98). Accordingly, we revealed that a selective PAD4 inhibitor does not affect survival in LPS-induced endotoxic shock (70). Therefore, we began to hypothesize that the protective effects were derived from PAD2 inhibition. As expected, the employment of a selective PAD2 inhibitor in the same model of LPS-induced endotoxic shock significantly increased survival (70). Thereafter, our studies further demonstrated that the knockout of Pad2 can improve survival in CLP-induced sepsis and PA-sepsis (59,103). Therefore, it can be inferred that PAD2 likely acts as a critical mediator in the pathogenesis of sepsis.

Given the minimal effect of PAD4 on sepsis, it raises questions as to why NETosis is closely related to sepsis and why septic animals can benefit from anti-NET therapies. The pathogenicity of NETs is derived from numerous components such as myeloperoxidase, DNA, and citrullinated histone H3 (CitH3) (104). Such components are also found in extracellular traps released by other immune cells, such as METs, which are more likely to be mediated by PAD2, as PAD2 is more abundantly expressed in macrophages than PAD4 (61). "Anti-NET therapies" here refers to the clearance of extracellular DNA or CitH3 (105-107), which also eliminates detrimental molecules from other sources, including METs, at the same time. In contrast, the knockout of PAD4 can only decrease the molecules coming from NETosis. This possibly explains why PAD4 inhibition is not protective during sepsis. In addition, we discovered that selective inhibition of PAD2 can decrease the release of CitH3 in neutrophils (Figure 3) (70). Furthermore, antibody neutralization of circulating CitH3 by a commercially available anti-CitH3 antibody was not shown to attenuate endotoxemia (105). However, administration of an antibody recognizing CitH3 generated by both PAD2 and PAD4 significantly improved survival (105). These findings display the differing effects of PAD2 and PAD4 inhibition during sepsis.

In a mouse model of CLP-induced lethal sepsis, we have newly demonstrated that PAD2 protein is elevated in serum and lung tissue after CLP (103). In septic patients, serum concentrations of PAD2 are positively correlated with lactate (r=0.5, p=0.04) and procalcitonin (PCT) levels (r=0.67, p=0.003) (108). Since lactate and PCT are considered markers for the prognosis and severity of sepsis (109,110), elevated PAD2 levels in serum may also serve as a future clinical biomarker and predictor of outcomes. Circulating CitH3 was also found to be positively correlated with blood PAD2 (r=0.0452, p<0.001) and PAD4 levels (r=0.363, p<0.01), respectively (108). The levels of PAD2 in bronchoalveolar lavage fluid (BALF) from patients with sepsis and acute respiratory distress syndrome (ARDS) are also significantly increased compared with those in a healthy control group (108).
Furthermore, the Pad2 gene was found to be over-expressed in cells of the BALF of patients with sepsis-specific ARDS. These consistent findings support the possible usage of PAD2 as a biomarker for sepsis-specific ARDS, which may serve as a distinguishing factor between sepsis-specific ARDS and other, non-infectious causes of ARDS.

PAD2 can mediate the onset of sepsis by directly regulating pyroptosis. We recently found that PA-sepsis-induced pyroptosis in macrophages is dramatically decreased in the absence of PAD2, thereby attenuating acute lung injury and improving survival (59). In the murine CLP-sepsis model, Pad2 depletion enhances bacterial clearance, attenuates sepsis-induced vascular permeability of the lung and kidney, and improves survival (103). Moreover, we found that macrophages stimulated by LPS undergo diminished Caspase-11-dependent pyroptosis in the absence of PAD2, which can explain how Pad2 knockout improves the outcomes of septic mice (Figure 3) (103). These findings have highlighted the detrimental role of PAD2-mediated pyroptosis in the pathogenesis of sepsis (Figure 3). PAD2 also catalyzes the generation of CitH3, which is recognized as a "danger" signaling molecule (70,111,112). Furthermore, it has been reported that "danger" signaling molecules (i.e., ATP and double-strand DNA) can elicit the activation of pyroptosis via the Caspase-1-dependent pathway (113,114). Based on these data, we hypothesize that CitH3 may play a role in activating the pyroptotic pathway and that PAD2 can also modulate pyroptosis in an indirect way. Altogether, PAD2 has potential as both a biomarker and a therapeutic target of sepsis.

Although we have demonstrated the effects of PAD2 activation on sepsis, the mechanisms by which PAD2 activation leads to these downstream effects in sepsis remain poorly understood. A previous study demonstrated that ATP induces PAD2 activity via P2X7 receptors (25). While ATP is required for almost all biological reactions as the universal energy source (115), once host cells are damaged, stressed, or infected by pathogens, intracellular ATP can be released to become extracellular ATP, which serves as a key "danger" signaling molecule (116)(117)(118). Additionally, certain pathogens can also produce and secrete extracellular ATP (119,120). The extracellular ATP may then bind to P2X7 receptors to induce calcium influx, leading to subsequent PAD2 activation (25). Nonetheless, there is limited evidence supporting that ATP release is responsible for the activation of PAD2 during infections. Thus, further work is required to elucidate the association between infection and PAD2 activity.

Rheumatoid Arthritis

The manifestations of RA are characterized by chronic synovitis, systemic inflammation, and the generation of ACPA and rheumatoid factors (121). ACPA recognize and bind to PAD2/4-citrullinated proteins, including vimentin, keratin, enolase, fibrinogen, and filaggrin (32,122). ACPA can serve as a useful biomarker with high sensitivity and specificity, and are often a predictor of poor prognosis (123)(124)(125)(126). Among all the citrullinated proteins associated with RA, vimentin is the most frequently studied. Vimentin is an intermediate filament protein that plays a significant role in fixing the position of cytosolic organelles (127). Macrophages, which also express vimentin, are found at high levels in synovial fluid aspirates of RA joints (128). During calcium ionophore-induced macrophage apoptosis, vimentin is found to be citrullinated by PADs (90).
Given the low expression of PAD4 in macrophages, PAD2 is likely the predominant PAD in the citrullination of vimentin. The cleavage of vimentin also occurs in the presence of calcium during macrophage pyroptosis (129). Since PAD2 is a calcium-dependent enzyme, it can be inferred that vimentin possibly undergoes citrullination prior to macrophage pyroptotic death. However, the mechanisms by which citrullinated vimentin is associated with the pathogenesis of RA are not clear. One explanation is that the host loses its self-tolerance to citrullinated vimentin due to hereditary factors, which leads to the production of ACPA (122,130). As a result, massive ACPA-citrullinated vimentin complexes deposit in the joints, causing activation of complement systems and leading to prolonged inflammation (131). Although genetic factors are closely related to the incidence of RA, the effects of environmental factors cannot be neglected (132). For example, a number of RA cases were found to be linked with infection (133). Thus, it is possible that infectious agent-induced macrophage death may be the initial step of RA onset. During the death processes of macrophages, vimentin is citrullinated by PAD2 and released. Meanwhile, more macrophages and other immune cells are attracted to the infected sites due to chemotaxis. Thereafter, citrullinated vimentin is recognized as an autoantigen which triggers the generation of ACPA. However, this hypothesis cannot explain the pathogenesis of ACPA-negative RA. Therefore, further work is required to understand the complexity of RA.

FIGURE 3 | The detrimental effects of PAD2-mediated pyroptosis and ETosis during sepsis. PAD2 facilitates the activation of Caspase-11, a key regulator in noncanonical pyroptosis, and causes macrophage death. In addition, PAD2 can translocate into the nuclei of neutrophils or macrophages and citrullinate histone H3 to induce ETosis. CitH3 generated during this process may further activate the canonical pyroptotic pathway as a danger signal. aCaspase-1/11, activated Caspase-1/11; CitH3, citrullinated histone H3; ETosis, cell death with release of extracellular traps (ETs); H3, histone H3; HMGB1, high mobility group box 1; IL, interleukin; M/NETosis, neutrophil/macrophage death with release of extracellular traps; MPO, myeloperoxidase; NE, neutrophil elastase. Lines, pathways already known; dotted lines, proposed hypothesis for the pathway to be elucidated.

PAD2 can be detected in synovial fluid from RA patients (134). It has been demonstrated that the major sources of PAD2 are inflammatory cells (8). RA patients with higher PAD2 levels in synovial fluid tend to have enhanced disease activity, suggesting that the level of PAD2 in synovial fluid is a potential prognostic indicator (135). Additionally, PAD2 can also be taken as an autoantigen by the host. RA patients who have developed autoantibodies against PAD2 are likely to display attenuated joint inflammation and RA-related lung disease (89).

M1 macrophages, which are activated by the classical pathway, can secrete proinflammatory cytokines such as TNF-α and IL-1 and cause joint erosion, while M2 macrophages, which are activated by the alternative pathway, can produce anti-inflammatory cytokines (mainly IL-10 and TGF-β), contributing to vasculogenesis and tissue remodeling and repair, as recently observed in systemic sclerosis. Markers for both macrophage phenotypes may coexist on the same cell (136,137).
Recent studies have revealed that M1/M2 macrophage imbalance strongly contributes to osteoclastogenesis in RA (138). Eghbalzadeh et al. reported that NETs support macrophage polarization toward an M2 phenotype, displaying anti-inflammatory properties; PAD4 deficiency aggravates acute inflammation and increases tissue damage after acute myocardial infarction, partially due to the lack of NETs (139). It remains largely unknown whether PAD2 affects macrophage polarization. Multiple Sclerosis MS is an autoimmune disorder of the central nervous system characterized by chronic demyelination of nerve cells (140). Patients with MS usually suffer from sensory loss, changes in sensation, difficulties in coordination, or problems with vision (141). The effects of PAD2 on the pathogenesis of MS remain under debate. Researchers have shown that citrullination of myelin basic protein (MBP) is increased in MS patients (42, 142, 143). Overexpression of PAD2 in mice leads to MBP hypercitrullination and myelin loss in the central nervous system (144). Hypercitrullination not only decreases the stability of MBP, but also puts MBP at higher risk of being attacked by T cells (28, 75). Th17 cells, a subtype of T cell, show enhanced reactivity to citrullinated MBP (75). As mentioned above, PAD2 can facilitate the polarization of CD4+ T cells into Th17 cells (11). Thus, PAD2 plays a critical role in MS pathogenesis. In line with these findings, a study demonstrated that PAD2 inhibition can attenuate disease severity in animal models mimicking MS (145). On the contrary, another study reported that deletion of the Pad2 gene in mice decreased levels of citrullinated MBP but did not reduce the incidence rate of experimental autoimmune encephalomyelitis (146). A recent study discovered that PAD2-mediated citrullination is indispensable during the differentiation and myelination of oligodendrocytes (147). Knockout of Pad2 in mice results in motor dysfunction and even decreased myelination in axons (147). Therefore, it is critical to keep a balanced PAD2 level in the central nervous system, as it maintains the normal structure and function of nerve cells; further studies should continue to elucidate the role of PAD2 in MS. Cancers Currently, PAD2 is implicated in skin tumors (148), breast cancer (10), colorectal cancer (149), and glioblastoma multiforme (150, 151). The intimate relationship between PAD2 and tumors is likely due to the role of PAD2 in modulating gene transcription. PAD2 is the only PAD that citrullinates arginine 1810 (Cit1810) in repeat 31 of the carboxyl-terminal domain (CTD) of the largest subunit of RNA polymerase II (RNAP2) (152). Cit1810 is crucial for RNAP2 to overcome the pausing barrier close to the transcription start site, which enables the efficient transcription of highly expressed genes needed for cell cycle progression, metabolism, and cell proliferation (152). The effects of PAD2 on the development of different tumors are not the same. For example, overexpression of PAD2 has been shown to augment the malignancy of skin tumors (153), while increased PAD2 expression has been linked to improved survival in patients with estrogen receptor (ER)-positive breast cancer (112). However, upregulated PAD2 expression in breast cancer is associated with resistance to tamoxifen treatment (154). These findings make PAD2 an enigmatic modulator of tumorigenesis. In the pathogenesis of breast cancer and glioblastoma multiforme, PAD2 modulates gene transcription via citrullinating histones (112, 151).
In colorectal cancer, however, PAD2 prevents tumor progression by citrullinating β-catenin and thereby inhibiting the Wnt signaling pathway. PAD2 inhibition increases the sensitivity of breast cancer cells to tamoxifen (154), while knockout of Pad2 induces marked resistance to nitazoxanide in colorectal cancer cells (149). More work is needed to investigate the involvement of PAD2 in other tumors (149). The therapeutic potential and applications of PAD2 in cancer remain to be further clarified. However, given the role of PAD2 in tumorigenesis and in the response to chemotherapy, PAD2 will remain a biomarker and target of interest in the era of personalized cancer care. CONCLUSIONS AND FUTURE PERSPECTIVES Citrullination is a posttranslational protein modification catalyzed by PADs and is involved in host immunity. PAD2 has wide-reaching roles through its citrullination of a variety of target proteins. Dysregulated activity of PAD2 is associated with a series of immune disorders including sepsis, RA, MS, and tumor formation (Figure 4). In this review, we have summarized PAD2-specific functions in cell death control, in transcription regulation by citrullination of arginine 26 on histone H3 (e.g., sepsis, tumors), and in citrullination of vimentin (e.g., RA). We highlight several citrullinated proteins to demonstrate the contributions of PAD2-mediated protein citrullination to RA, sepsis, and cancer within each specific environment. Given that CitH3 is also found to be a biomarker in patients with cancers (155, 156), more epigenetic studies are needed to explore if and how citrullination of histone H3 interferes with transcription factors to regulate RA, sepsis, and cancers. We propose that PAD2 is a promising novel biomarker and therapeutic target for a broad spectrum of diseases including autoimmune and inflammatory diseases, sepsis, MS, and several types of cancer. AUTHOR CONTRIBUTIONS ZW and YL drafted the manuscript. PL, YT, WO, JH, HA, and YL made significant revisions to the manuscript. All authors contributed to the article and approved the submitted version. FUNDING This work was funded by grants from the National Institutes of Health R01 (Grant# RHL155116A) to YL and HA, and the Joint Institute (Grant# U068874) to YL.
2021-11-04T13:21:10.473Z
2021-11-04T00:00:00.000
{ "year": 2021, "sha1": "84239afd4288d39571d783a15e84bf6696844480", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2021.761946/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "84239afd4288d39571d783a15e84bf6696844480", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
15026892
pes2o/s2orc
v3-fos-license
Protective Effects of Edaravone in Adult Rats with Surgery and Lipopolysaccharide Administration-Induced Cognitive Function Impairment Postoperative cognitive dysfunction (POCD) is a clinical syndrome characterized by cognitive decline in patients after surgery. Previous studies have suggested that surgery contributes to such impairment, and neuroinflammation has been shown to exacerbate surgery-induced cognitive impairment in aged rats. The free radical scavenger edaravone has high blood-brain barrier permeability, has been demonstrated to effectively remove free radicals from the brain, and alleviated the development of POCD in patients undergoing carotid endarterectomy, suggesting a potential role in preventing POCD. For this reason, this study was designed to determine whether edaravone protects against POCD through inhibitory effects on inflammatory cytokines and oxidative stress. First, adult male Sprague Dawley rats were administered 3 mg/kg edaravone intraperitoneally after undergoing a unilateral nephrectomy combined with lipopolysaccharide injection. Second, behavioral parameters related to cognitive function were recorded by fear conditioning and Morris Water Maze tests. Last, superoxide dismutase activities and malondialdehyde levels were measured in the hippocampi and prefrontal cortex on postoperative days 3 and 7, and microglial (Iba1) activation, p-Akt and p-mTOR protein expression, and synaptic function (synapsin-1) were also examined 3 and 7 days after surgery. Rats that underwent surgery plus lipopolysaccharide administration showed significant impairments in spatial and working memory, accompanied by significant reductions in hippocampal-dependent and -independent fear responses. All impairments were attenuated by treatment with edaravone. Moreover, an abnormal decrease in superoxide dismutase activity, an abnormal increase in malondialdehyde levels, a significant increase in microglial reactivity, downregulation of p-Akt and p-mTOR protein expression, and a statistically significant decrease in synapsin-1 were observed in the hippocampi and prefrontal cortices of rats at different time points after surgery. All of these abnormal changes were totally or partially reversed by edaravone. To our knowledge, few reports have examined the protective effects of edaravone on POCD induced by surgery plus lipopolysaccharide administration; its anti-oxidative-stress and anti-inflammatory effects, as well as its maintenance of Akt/mTOR signaling pathway activation, might be closely related to its therapeutic effects. Our research demonstrates the potential use of edaravone in the treatment of POCD. Introduction Postoperative cognitive dysfunction (POCD) refers to varying degrees of cognitive function decline in patients after surgery. It covers a wide range of cognitive functions including working memory, long-term memory, information processing, attention, and cognitive flexibility [1,2].
POCD reduces quality of life and increases social dependence and mortality [3]. Oxidative stress, surgery, general anesthesia/anesthetics, and neuroinflammation are believed to increase the risk of POCD [4-6]. Certain tissues can be damaged as a result of oxidative stress, especially during an operation [7]. The free radical scavenger edaravone, which crosses the blood-brain barrier, can effectively remove free radicals from the brain [8]. Evidence has shown that oxidative factors are harmful to cognitive function [9,10]. Edaravone, however, can improve the cholinergic system, protect neurons from oxidative toxicity, and alleviate Alzheimer's disease-type pathologies and cognitive deficits [11,12]. Other studies demonstrated that edaravone inhibited the progression of cerebral infarction and ischemia [13,14]. Most importantly, the effects of edaravone on the development of POCD have been demonstrated in patients undergoing carotid endarterectomy [15]. In short, previous studies suggest that edaravone might improve cognitive impairment in patients after surgery by scavenging free radicals. Lipopolysaccharide (LPS) is a major bacterial TLR4 ligand that activates the immune response to infections [16]. Recent reports have demonstrated that surgery accompanied by LPS treatment triggers more severe neurodegeneration in adult rats [17], and the interaction between oxygen free radicals and inflammatory factors can exacerbate postoperative cognitive dysfunction [18,19]. Both can disrupt cell membrane function, break the balance of homeostasis, and derange oxidative phosphorylation [20]. Normal activation of the Akt/mTOR signaling pathway depends on phosphorylation [21]; in models of cognitive impairment, an increase in activated microglial cells and inhibition of Akt/mTOR pathway activation have been observed, ultimately leading to declines in learning and memory [22,23]. mTOR is also involved in regulating synaptic plasticity, which affects memory and cognition [24,25]. Based on previous reports, we hypothesized that in a rat model of surgery associated with LPS administration, edaravone might improve POCD by alleviating oxidative toxicity, inhibiting microglial activation, and maintaining normal activation of the Akt/mTOR signaling pathway. The results obtained in this study may provide new insights into the potential roles of edaravone in the treatment of POCD, as well as its mechanisms of action. Adult male Sprague Dawley rats (license no. SCXK (JING) 2012-0001) were housed under controlled conditions with a 12-h light/dark cycle and ad libitum access to food and water for 7 days before the experiment. All procedures for animal experimentation were approved by the Animal Care Committee of the Chinese People's Liberation Army General Hospital (Beijing, China). The maintenance and handling of the rats were consistent with the guidelines of the National Institutes of Health, and adequate measures were taken to minimize animal discomfort. The rats were randomly divided into four groups (20 rats per group): the control plus placebo group (C-P), control plus edaravone group (C-E), surgery plus placebo group (S-P), and surgery plus edaravone group (S-E). Each group was randomly divided into two subgroups (10 rats per subgroup): the 3-day postoperative group and the 7-day postoperative group. The C-P and S-P groups received a placebo (0.3 mL of saline by intraperitoneal [i.p.] injection), and the C-E and S-E groups received 3 mg/kg of edaravone (Cat: 80-131003, Simcere, Nanjing, China) in 0.3 mL of saline by i.p. injection.
Surgical Procedures After undergoing the Morris Water Maze (MWM) test and fear conditioning training for 5 consecutive days, animals in the S-P and S-E groups received LPS at 100 μg/kg i.p. (Sigma, St. Louis, MO, USA). The dosage of LPS was determined according to a previous report [17]. After 1 h, the LPS-treated groups underwent a left nephrectomy under pentobarbital sodium anesthesia (1%, 40 mg/kg) (Fig 1A). A longitudinal incision was made in the back, where the wounds were not accessible to the rats, to avoid self-inflicted bite trauma. We considered this surgical model to mimic a standardized organ removal in humans with subclinical infection [8]. During the operation, the rats' body temperature was maintained at 36.5°C to 37.5°C. All rats received 50 μl of 0.2% ropivacaine subcutaneously for postoperative analgesia. Rats were allowed to recover in an incubator at 37°C and were then returned to their cages. Thereafter, the C-P and S-P groups received saline (i.p.), whereas the C-E and S-E groups received edaravone (i.p.) each day until day 3 or day 7 after surgery, respectively (Fig 1B). Each rat was weighed every day after the operation; all rats gained weight over the postoperative days. Wounds were disinfected on days 1, 2, 3, 5, and 7 after surgery. The animals were then sacrificed with a lethal dose of pentobarbital sodium i.p. at 3 and 7 days postoperatively, and the brains of five rats in each group were immediately removed and fixed in 4% paraformaldehyde for 48 h for histological analysis. The hippocampi and prefrontal cortices of the other rats were rapidly dissected, removed, and stored at -80°C until analysis. Behavior tests The MWM test (EthoVision, The Netherlands) was performed to assess spatial learning, spatial memory, and cognitive flexibility in the rats [26]. The water maze consisted of a round container (180 cm in diameter, 60 cm high) made of black plastic and filled with water (25 ± 1°C). The pool was placed in a room with several visual cues for orientation in the maze. The maze was divided into four quadrants: the first, second, third, and fourth. An invisible platform (10 cm × 10 cm) was placed 1 cm below the water surface in the first quadrant (the target quadrant). All rats underwent repeated training for 5 consecutive days. Each day, they were released successively into the water facing the wall of the pool, from the first quadrant to the fourth quadrant. The rats were trained to find the hidden platform and climb onto it within 60 s. The animals were allowed to stay on the platform for at least 10 s after each trial. When a rat failed to reach the escape platform within 60 s, it was gently guided towards the platform and left there for 10 s. After the completion of four trials, the rat was dried with a towel and returned to its cage. The animals' movements were recorded with a video camera. On postoperative day 3, probe tests were conducted on all the treated groups by removing the platform and releasing the rats in the third quadrant (opposite the first quadrant). Latency, the number of crossings over the former location of the platform, and time spent in each quadrant were measured in a single 60-s trial. Then, working memory was tested; both the platform and the rat were randomly placed in novel positions to assess trial-dependent learning and working memory [27]. Animals underwent one more training session to ensure that all rats learned the new platform location.
After 15 s, each rat was released from the same location as in the above training; the rat would swim a shorter path to the platform in the second trial if it recalled the first trial. The escape latency to the platform in the second trial was taken as a measure of temporary or working memory. All of the 7-day postoperative groups underwent the same trials on postoperative day 7. 2.3.2 Fear conditioning. Fear conditioning is used to detect associative learning and memory function [28]. Different groups of rats were trained for fear conditioning 1 day before the operation. Rats were subjected to an inescapable electric foot shock delivered via the grid floor of a testing chamber. The chamber in which training occurred was lit with fluorescent bulbs. The total training time was 330 s for each rat. Each animal was allowed to explore the chamber for 60 s before the presentation of 3 tone-foot shock pairings (tone: 2000 Hz, 85 dB, 30 s; foot shock: 0.9 mA, 2 s) with an intertrial interval of 60 s. The animal was then removed from the test chamber 60 s after conditioning training. Different groups underwent the context test and tone test on postoperative days 3 and 7, respectively. Each animal was placed into the chamber for 330 s, either for a context test (without a tone or shock) or for a tone test (without a shock). Episodes of freezing were recorded by a digital camera. These tests assessed hippocampal-dependent (context-related) and hippocampal-independent (tone-related) learning and memory functions [29], expressed as the percentage of freezing time using software analysis. Biochemical analysis 2.4.1 Malondialdehyde (MDA). MDA is a product of lipid peroxidation. The concentration of MDA indicates how severely tissue has been attacked by free radicals. The assay is based on thiobarbituric acid (TBA); the color reaction was measured at 532 nm. The levels of MDA in the hippocampi and prefrontal cortices of rats were measured using commercial assay kits (Nanjing Jiancheng Bioengineering Institute, Nanjing, China) according to the manufacturer's instructions. 2.4.2 Superoxide dismutase (SOD) activity. The method is based on the ability of SOD to inhibit the superoxide anion free radical (O2−). The color reaction was measured at 550 nm. The SOD activity of tissue was also measured using commercial assay kits (Nanjing Jiancheng Bioengineering Institute, Nanjing, China). Immunofluorescence staining A cerebral block containing the hippocampi and prefrontal cortex was fixed in 10% neutral-buffered formalin overnight and then embedded in paraffin. Coronal 10-μm sections were prepared and subjected to immunofluorescence staining. First, paraffin sections were dewaxed and placed in EDTA buffer (pH 8.0) for antigen retrieval. Second, sections were washed in 0.01% Triton X-100 in phosphate-buffered saline (PBS-T) and blocked with 3% bovine serum albumin (BSA) for 30 min at room temperature. They were then incubated overnight at 4°C with the appropriate primary antibodies: anti-Iba1 (1:100; WAKO) and anti-synapsin-1 (1:100; Cell Signaling). Next, the sections were incubated with the appropriate secondary antibodies, anti-rabbit IgG (1:400; Jackson) and anti-mouse IgG (1:400; Jackson), for 2 h at room temperature. The number of positively stained microglial cells was counted by fluorescence microscopy at 400× magnification, and the mean density of the synapses was also calculated by fluorescence microscopy at 400× magnification.
Western blot The hippocampal and prefrontal cortical tissues were homogenized in RIPA buffer (50 mmol/L Tris-HCl, pH 6.8, 150 mmol/L NaCl, 5 mmol/L EDTA, 0.5% sodium deoxycholate, 0.5% NP-40, supplemented with a cocktail containing protease and phosphatase inhibitors). The total lysates were centrifuged at 12,000 rpm for 30 min at 4°C. Protein concentrations were determined by a BCA Protein Assay reagent kit (Pierce, Rockford, IL, USA). Equal amounts of sample (30 μg of protein) were separated by SDS-PAGE and analyzed by Western blot using the following primary antibodies: rabbit polyclonal anti-Akt and anti-p-Akt (1:1000, Cell Signaling), rabbit polyclonal anti-p-mTOR (1:1000, Cell Signaling), and mouse monoclonal anti-β-actin antibody (1:3000; Abcam). Appropriate secondary antibodies were used. Each experiment was repeated no fewer than four times. Relative expression was normalized to β-actin. Statistical analysis All data were analyzed by an observer who was blinded to the experimental protocol. Statistical calculations were performed using SPSS 16.0 (SPSS Science, Inc., Chicago, IL, USA). We analyzed multiple group means by a two-way analysis of variance followed by Dunnett's post hoc test wherever appropriate; values of p < 0.05 were considered significant (a schematic of this comparison, using synthetic data, is sketched after the Supporting Information below). Edaravone attenuated unilateral nephrectomy plus LPS administration-induced learning and memory impairment Previous work has demonstrated that a nephrectomy plus an LPS injection can lead to POCD [17]. Therefore, the protective effects of edaravone on POCD were examined in this model. As shown in Fig 2A, in the MWM test, the escape latency in all groups was significantly shorter during the last training session than during the first training session (p < 0.001), yet no difference was observed between the groups, indicating that all animals were able to learn where the platform was located. On postoperative day 3, the dwell time in the target quadrant in the first MWM probe trial was notably decreased in the S-P group compared to the other groups (p < 0.05), and the number of crossings also showed a decreasing tendency, although it did not reach significance (Fig 2B and 2C). In the working memory test, the escape latency needed to reach the new platform was significantly increased (p < 0.05) in the S-P group compared to the C-P and S-E groups (Fig 2D). During the probe test, there was no significant difference in swimming speed between the groups, suggesting that the poorer performance of the S-P group was not a result of reduced motor ability (Fig 2E). On postoperative day 7, there was no statistical difference between the S-P group and the other groups in dwell time in the target quadrant, number of crossings, or escape latency, although rats in the S-P group showed a decreasing tendency in dwell time in the target quadrant and an increasing tendency in escape latency. In the fear conditioning test, hippocampal-dependent memory was assessed in a novel context and revealed highly significant impairment in the S-P group compared to the C-P group on postoperative days 3 (p < 0.01) and 7 (p < 0.05) (Fig 3A and 3B). Compared to the C-P group, the freezing time in the S-P group was significantly decreased (p < 0.01). This decrease was clearly reversed in the S-E group (p < 0.05/0.01), indicating the protective effects of edaravone on the development of POCD.
During the tone-related fear conditioning test (hippocampal-independent memory) on postoperative day 3, as shown in Fig 3C, the freezing time percentage was notably decreased in the S-P group compared to the C-P group (p < 0.05); this decrease was significantly prevented by edaravone (p < 0.05). On postoperative day 7, freezing responses to the tone were not significantly different between any of the groups (Fig 3D). Edaravone increased SOD activities and reduced MDA levels in the hippocampi and prefrontal cortex in rats after surgery plus LPS administration As demonstrated in Fig 4A and 4B, compared to the C-P group, the SOD activities of the hippocampi and prefrontal cortex in the S-P group were significantly decreased on postoperative day 3 (p < 0.01/0.001), but showed no change on postoperative day 7; this abnormal decrease in SOD activities was largely prevented by edaravone (p < 0.05). Likewise, edaravone significantly attenuated the abnormally increased MDA levels in the hippocampi of the S-P group 3 days after the operation (p < 0.01) (Fig 4C). No difference was observed between the groups regarding MDA levels in the prefrontal cortex on postoperative day 3 (Fig 4D) or day 7, although the MDA level in the S-P group showed an increasing tendency without reaching statistical significance. Edaravone prevented microglial activation after surgery plus LPS administration Using immunofluorescence, the effects of edaravone on ionized calcium binding adapter molecule 1 (Iba1) were investigated. As shown in Fig 5A-5X, the total number of Iba1-positive cells counted on hippocampal (Fig 5Y; p < 0.001) and prefrontal cortical (Fig 5Z; p < 0.01) sections in the S-P group was much higher than in the C-P and S-E groups on postoperative day 3, yet there was no significant difference among the treated groups on postoperative day 7. Edaravone attenuated surgery plus LPS administration-induced neuroinflammation To further investigate the mechanism by which edaravone prevents microglial activation, Akt/mTOR signaling pathway-related protein expression was tested by Western blot. As shown in Fig 6A-6F, on day 3 after the operation, protein levels of p-Akt and p-mTOR in the rats' hippocampi and prefrontal cortices were markedly decreased in the S-P group compared to the C-P group (p < 0.05/0.01); this abnormal decrease was significantly attenuated (p < 0.05) by edaravone. On postoperative day 7, no difference in protein expression was observed between any of the groups. Edaravone improved surgery plus LPS administration-induced synaptic function depression To further evaluate the protective effects of edaravone on surgery plus LPS administration-induced cognitive function impairment, the synaptic protein synapsin-1 (SYN) was examined. On postoperative day 3, a significant reduction in SYN intensity was observed in hippocampi from the S-P group (p < 0.001) (Fig 7A-7L), and this reduction was partially reversed (p < 0.01) by edaravone (Fig 7Y). On postoperative day 7, the SYN intensities in the hippocampi showed no difference. In contrast to the hippocampi, the expression of SYN in the prefrontal cortex was not different between any of the groups on postoperative day 3 (Fig 7M-7X and 7Z) or day 7. Discussion This paper shows that surgery plus LPS injection can induce POCD in rats, and that the resulting cognitive impairment can be largely prevented by edaravone.
Moreover, the protective effects of edaravone on the development of POCD in rats may be related to its antioxidant effects, its inhibition of microglial activation, and its maintenance of normal activation of the Akt/mTOR signaling pathway. Recent studies have revealed that surgery can lead to cognitive decline by triggering systemic and hippocampal inflammation [5,30,31]. Systemic infection increases the levels of pro-inflammatory cytokines in the brain that contribute to subsequent impairment of memory consolidation in rats [32]. LPS, the major component of the outer membrane of Gram-negative bacteria, is known to trigger a powerful immune response [16]. Priming the immune system with a subclinical dose of LPS can amplify the pro-inflammatory response caused by surgery [33]. In clinical practice, it is very common for patients to have subclinical infection before or after an operation [17]. For this reason, based on the reported studies, we chose a dosage of LPS (100 μg/kg) to mimic subclinical infection. The chosen dose has been tested and has the ability to sensitize the immune system and augment the severity of unilateral nephrectomy-induced impairment of cognition [17]. The MWM test was chosen as a robust and reliable test that is strongly correlated with hippocampal-dependent memory [34,35]. It consists of two parts: the spatial reference memory test and the reversal test. In the spatial reference memory test, obvious reference memory impairment was observed in the S-P group, and this memory impairment was significantly alleviated by edaravone. The MWM reversal task was used to evaluate cognitive flexibility, which is independent of hippocampal function [36]. An obvious reduction in learning ability and short-term memory was shown in the S-P group, and this postoperative cognitive impairment was also prevented by edaravone. In the novel context test of fear conditioning, hippocampal-dependent cognitive dysfunction was sustained on postoperative day 7, whereas hippocampal-independent cognitive decline occurred on postoperative day 3 but did not last to postoperative day 7. Edaravone administration also prevented cognitive decline and accelerated cognitive recovery in the fear conditioning test. In the context fear conditioning test, cognitive dysfunction was sustained on postoperative day 7, while the spatial reference memory in the MWM test on postoperative day 7 was unchanged in the surgery plus LPS group. This may be because rats form different types of memory in different regions of the hippocampus, and the degree of damage to hippocampal regions induced by surgery plus LPS differed. Although spatial memory and contextual fear memory are both hippocampal-dependent, the formation of these memories depends on different brain regions [37]. Spatial memory relies on the participation of the hippocampus, corpus striatum, basal forebrain, cerebellum, and other regions, and damage to any of these tissues can induce memory impairment [38]. Impairment of the dorsal hippocampus is more consequential than impairment of the ventral hippocampus for spatial memory decline [39]. The fear conditioning test forms contextual memory, which relies mainly on the CA1 region of the hippocampus [40]; in particular, CA1 activity is associated with contextual fear memory [41]. Previous studies have shown that cognitive impairment was obvious in water maze and fear conditioning tests in unilateral nephrectomy-treated aged rodents [42,43].
Meanwhile, systemic inflammation is believed to increase the levels of pro-inflammatory cytokines in the brain and aggravate POCD [17,32]. Edaravone, a known antioxidant, has been demonstrated to antagonize POCD in patients [15]. However, to our knowledge, few studies have examined the protective effect of edaravone in POCD induced by surgery plus LPS injection. Our study is the first to demonstrate the potential role of edaravone in the treatment of cognitive impairment caused by surgery plus LPS injection. Previous studies have indicated that surgery contributes to the inflammatory response and oxidative stress by activating the immune system [44,45], and systemic infection results in more inflammatory cytokines in the brain [32]. Both inflammation and oxygen free radicals are believed to take part in the onset and maintenance of POCD [46,47]. Moreover, inflammation also promotes the entrance of oxygen free radicals into the central nervous system [29] and thereby exacerbates the injurious effects of oxidative stress on cognitive function [18]. For these reasons, the antioxidant and anti-neuroinflammatory effects of edaravone were further investigated in rats that underwent surgery plus an LPS injection. Abnormal changes in the activities of SOD and the levels of MDA in brain tissues are thought to reflect dysfunction and structural damage to cell membranes, mitochondria, and lysosomes, as well as cell autolysis, related to POCD [46]. In addition, the overexpression of inflammatory cytokines is often accompanied by an increased number of activated microglial cells [48,49], characterized by an acute increase in Iba1. In this paper, decreased activities of SOD and increased levels of MDA, as well as a significant increase in Iba1, were shown after the operation (on postoperative day 3 for SOD, MDA, and Iba1) in the hippocampi and prefrontal cortices of S-P group animals. All of the above-mentioned abnormal changes were partially reversed by edaravone, further suggesting that the protective effects of edaravone on POCD might be related to its antioxidant and anti-neuroinflammatory effects. In addition to attenuating oxidative stress and neuroinflammation, maintaining the activation of the Akt/mTOR signaling pathway, thereby inhibiting inflammation, is thought to be a reliable approach to preventing surgery-induced POCD [50]. The reason is that the Akt/mTOR signaling pathway has been shown to play a crucial role in the induction of key anti-inflammatory and immunomodulatory cytokines [50,51]. In addition, the activation of the Akt/mTOR signaling pathway can be inhibited by oxidative stress [52,53]. Most importantly, drugs with known protective effects against POCD, such as acetylcholinesterase inhibitors, have been found to have the ability to activate the Akt/mTOR pathway [54]. In order to investigate the relationship between the protective effects of edaravone on POCD and activation of the Akt/mTOR pathway, the protein expression of p-Akt and p-mTOR, as well as SYN intensity, was also tested. In general, p-Akt participates in regulating cell apoptosis, stimulating cell proliferation, and many other physiological processes [27]. Inflammatory factors such as TNF-α and IL-6, as well as oxidative factors, can inhibit the activation of the Akt/mTOR signaling pathway by downregulating the expression of p-Akt protein [55,56].
mTOR, the main downstream signaling factor in the Akt/mTOR signaling pathway, has been shown to correlate closely with cognitive dysfunction, such as in Alzheimer's disease [57]. Moreover, it has also been demonstrated to partially influence synaptic plasticity and memory [24,25] by regulating the synthesis of certain proteins associated with reshaping of the synapse [58,59]. Synaptic plasticity has been shown to be the biological basis for maintaining learning and memory under normal conditions [60], and SYN-1 is regarded as being involved in regulating the number of synaptic vesicles and as contributing to synaptic function. In the S-P group, downregulation of p-Akt and p-mTOR protein expression in the hippocampi and prefrontal cortex, accompanied by a reduction in SYN intensity in the hippocampi, was observed; these effects were largely reversed by edaravone, indicating that edaravone can also maintain normal activation of the Akt/mTOR signaling pathway by preventing the downregulation of p-Akt and p-mTOR proteins. As a result, neuroinflammation caused by surgery was largely inhibited and synaptic plasticity was maintained, which finally led to the significant attenuation of POCD induced by an operation plus LPS injection. Conclusions In summary, obvious cognitive impairment was shown in rats that underwent a unilateral nephrectomy plus LPS administration. The known antioxidant edaravone could effectively attenuate this cognitive impairment; its protective mechanism may be related to its antioxidant and anti-inflammatory effects, as well as its ability to maintain activation of the Akt/mTOR signaling pathway. Although the details of how edaravone improves cognitive function are not yet clear, this paper may provide a new strategy to counter POCD caused by operations. Supporting Information S1 File. Table A. Average escape latency (s) in the spatial learning phase of the MWM. Table B. MWM test indices on day 3 after surgery. Table C. Fear conditioning test indices. Table D. SOD activity (U/mg protein) and MDA concentration (nmol/mg protein) on postoperative day 3.
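As a note on the analysis pipeline: the group comparisons throughout were made in SPSS with a two-way ANOVA followed by Dunnett's post hoc test (see Statistical analysis above). A rough open-source equivalent is sketched below in Python; the dataframe, group means, and effect sizes are entirely synthetic placeholders that only mirror the 4-group × 2-day design, and the Dunnett step (scipy.stats.dunnett, available in SciPy ≥ 1.11) pools the two postoperative days for simplicity, which the original analysis may not have done.

```python
# Sketch of a two-way ANOVA followed by Dunnett's post hoc test against the
# C-P control group; all data below are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy.stats import dunnett  # requires SciPy >= 1.11

rng = np.random.default_rng(0)

# Hypothetical escape-latency data: 4 groups x 2 postoperative days,
# 10 rats per cell (mirroring the study design, not its actual measurements).
groups, days, rows = ["C-P", "C-E", "S-P", "S-E"], [3, 7], []
for g in groups:
    for d in days:
        latency = rng.normal(loc=30 if g == "S-P" else 20, scale=5, size=10)
        rows += [{"group": g, "day": d, "latency": x} for x in latency]
df = pd.DataFrame(rows)

# Two-way ANOVA: main effects of group and day, plus their interaction.
model = ols("latency ~ C(group) * C(day)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Dunnett's post hoc test: each treated group versus the C-P control,
# pooling the two days for brevity.
control = df.loc[df.group == "C-P", "latency"].to_numpy()
treated = [df.loc[df.group == g, "latency"].to_numpy() for g in ["C-E", "S-P", "S-E"]]
res = dunnett(*treated, control=control)
for g, p in zip(["C-E", "S-P", "S-E"], res.pvalue):
    print(f"{g} vs C-P: p = {p:.3f}")
```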
2016-05-12T22:15:10.714Z
2016-04-26T00:00:00.000
{ "year": 2016, "sha1": "7ab98cc0590249bc9105ef2334c25f748ba4854e", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0153708&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7ab98cc0590249bc9105ef2334c25f748ba4854e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
213368969
pes2o/s2orc
v3-fos-license
The oxidation properties of MgF2 particles in hydrofluorocarbon/air atmosphere at high temperatures The high-temperature oxidation properties of magnesium fluoride powders in an atmosphere of air containing hydrofluorocarbon (HFC-134a) were characterized by EDS, SEM, XRD, and gravimetric analyses. The results showed that the oxidation behavior of magnesium fluoride powder was related to the concentration of HFC-134a, the temperature, and the reaction time. As the concentration of HFC-134a decreased, the temperature rose, and the reaction time lengthened, the extent of magnesium fluoride oxidation increased. This result is of great significance for further study of the protective mechanism of fluorine-containing gases on Mg and its alloys. Introduction As one of the lightest structural metals, Mg alloys are used in the automotive and aerospace industries. However, their applications are still limited by oxidation and combustion during processing [1]. To prevent oxidation and burning, sulfur hexafluoride (SF6) is generally used as a protective gas during the processing of Mg alloys. In recent years, SF6 has been phased out owing to its strong greenhouse effect [2]. Therefore, it is urgent and necessary to find replacements for SF6 during processing. Hydrofluorocarbon (CF3CH2F, HFC-134a) has recently been introduced to protect the melting process of Mg alloys [3]. Further studies show that when Mg alloys are smelted and processed under HFC-134a/air protection, a composite protective film of MgF2 and MgO forms on the surface of the melt. MgF2 is the key component of the film, which is of great importance for melt protection, and its content reflects the protective effect of the compound gas [4]. In general, MgF2 is stable and does not decompose under normal circumstances. However, some studies have confirmed that MgF2 can be transformed into MgO in air at high temperatures [5-8]. Will this oxidation reaction also occur in an HFC-134a/air atmosphere at high temperature? If so, what factors affect the reaction process? So far, these questions remain unanswered, and related research has not been reported. Therefore, the oxidation properties of MgF2 in high-temperature HFC-134a/air atmospheres were investigated, in order to provide a theoretical basis for further research on the protective mechanism of fluorine-containing gases on magnesium and its alloys. Experiments The feedstock in this study was high-purity MgF2 powder; its chemical composition (wt%) included 0.002 SO4^2−. The gases used in this study were mixtures of hydrofluorocarbon (HFC-134a > 99.9%, water < 0.001%) and air. The oxidation of MgF2 powders was studied by means of a high-temperature test method. The tests were conducted in a tubular furnace in an HFC-134a/air atmosphere. The experimental apparatus included an air-generating pump and a Φ50 mm × 1000 mm quartz tube in a tubular furnace with a temperature control device. In this work, air and HFC-134a were mixed in the set proportions (by volume; a 0.1% mixture, for example, corresponds to roughly 0.3 mL/min of HFC-134a in a 300 mL/min total flow) and then fed continuously into the quartz tube at 300 mL/min. After purging the inside of the tube with the gas mixture for 1 h, about 2.5 g of MgF2 sample, which had been passed through a 200-mesh sieve, was placed in a crucible, heated to the desired temperature at a rate of 8~10 °C/min, and held in the atmosphere for a set time.
The sample was then cooled to room temperature and stored in a desiccator for the subsequent X-ray diffraction, energy dispersive spectrometry, and scanning electron microscopy analyses. The phase composition of the oxidized MgF2 specimens was determined with a Rigaku Ultima IV X-ray diffractometer with a Cu-Kα source operated at 40 kV and 40 mA. The morphology and elemental composition of the oxidized MgF2 samples were investigated with a Quanta FEG 250 FESEM and an EDAX Genesis APEX EDS, respectively. Figure 1 presents the X-ray diffraction spectra of MgF2 specimens after oxidation in compound-gas atmospheres with diverse concentrations of HFC-134a at 1000 °C for 2 hours. No obvious MgO formed in the gas mixtures containing 1% and 0.5% HFC-134a. In air with HFC-134a concentrations of 0.1% and below, the peaks of MgO appeared, and their intensities increased as the concentration of HFC-134a went down. This indicates that at 1000 °C, MgF2 was not oxidized in air with relatively high concentrations of HFC-134a, while in air with relatively low concentrations of HFC-134a, MgF2 was oxidized, and the degree of oxidation increased gradually as the HFC-134a concentration decreased. The morphologies of MgF2 samples after exposure to air containing 1%~0.01% HFC-134a at 1000 °C, scanned by scanning electron microscopy, are shown in Figure 2. In air containing 1% and 0.5% HFC-134a, the MgF2 particles were larger. A small number of white granules appeared on the sample surfaces in the atmosphere containing 0.1% HFC-134a; these were identified as MgO by EDS spot scanning. As the HFC-134a gas concentration was reduced from 0.1% to 0.01%, the white granules increased. These results suggest that in air with lower HFC-134a concentrations, more MgF2 is converted to MgO. Table 1 lists the EDS results for oxidized MgF2 samples in atmospheres of air containing diverse concentrations of HFC-134a at 1000 °C for 2 hours. It clearly shows that when the concentration of HFC-134a was 1% or 0.5%, the oxygen content was low and varied little. As the HFC-134a concentration fell to 0.1%, 0.05%, and 0.01%, the oxygen content increased markedly and rose further as the HFC-134a concentration decreased. This demonstrates that MgF2 underwent only weak oxidation in atmospheres of air containing 1% and 0.5% HFC-134a, and that the extent of MgF2 oxidation increased as the HFC-134a concentration decreased. The weight loss of oxidized MgF2 samples in different concentrations of HFC-134a/air atmosphere at 1000 °C for 2 hours is shown in Figure 3. As shown in Figure 3, the weight loss increased as the HFC-134a concentration was reduced. MgF2 does not decompose or evaporate below 1000 °C, and the oxidation of MgF2 into MgO results in weight loss (the molar mass of MgF2 is 62.3 g mol−1, and that of MgO is 40.3 g mol−1; a worked mass-balance relation is sketched after the Conclusions). Therefore, the fact that the weight loss of oxidized MgF2 samples increased with decreasing HFC-134a concentration means that the degree of MgF2 oxidation increased as the HFC-134a concentration decreased. This outcome is consistent with the EDS analysis above. Figure 4a presents the XRD results for oxidized MgF2 samples at diverse temperatures in air containing 0.01% HFC-134a for 2 hours. At temperatures of 600 and 700 °C, there is only the MgF2 peak and no MgO peak in the spectrum. At 800~1000 °C, the MgO peak appeared, and as the temperature increased, the height of the MgO peak increased progressively.
The XRD results for MgF2 samples after oxidation at different temperatures in air containing 0.1% HFC-134a for 2 h are shown in Figure 4b. It can be seen that at temperatures of 600~850 °C, there were only MgF2 peaks in the spectra and no MgO peak. Between 900 and 1000 °C, the MgO peaks appeared, and the intensity of the MgO peak increased gradually with increasing temperature. This result is similar to that for MgF2 in the 0.01% HFC-134a/air atmosphere. It indicates that temperature is also an important factor affecting the oxidation behaviour of MgF2 in HFC-134a/air atmospheres, and that the degree of MgF2 oxidation increases with increasing temperature. The EDS results for MgF2 samples after oxidation at different temperatures in air containing 0.01% HFC-134a for 2 h are shown in Table 2. According to Table 2, at 800 and 850 °C, the O content was very low and increased only slightly with increasing temperature. At 900, 950, and 1000 °C, the O content increased markedly with increasing temperature. These results are consistent with the XRD analysis above. Figure 5 shows the weight loss of MgF2 samples after oxidation in 0.1% HFC-134a/air at different temperatures for 2 h. As shown in Figure 5, the weight loss of MgF2 samples increased with increasing temperature. This further indicates that temperature has a considerable influence on the high-temperature oxidation behavior of MgF2 in HFC-134a/air atmospheres. Figure 5. Weight loss of MgF2 samples exposed to 0.1% HFC-134a/air at different temperatures for 2 hours. Influences of reaction time on MgF2 oxidation The XRD results for oxidized MgF2 samples at diverse times in air containing 0.1% HFC-134a at 1000 °C are shown in Figure 6. There were no MgO peaks at 1 h, while from 2 h to 5 h the height of the MgO peaks increased slightly. This indicates that as the reaction time increased from 1 h to 5 h, the MgO content increased and the MgF2 content decreased. Table 3 presents the EDS results for oxidized MgF2 samples at diverse times in air containing 0.1% HFC-134a at 1000 °C. According to the table, when the reaction time was 1 h, the oxygen content was small. As the reaction time increased from 2 hours to 5 hours, the oxygen content increased distinctly, which matches the corresponding XRD results. Conclusions The oxidation of MgF2 particles in HFC-134a/air atmospheres at high temperatures was studied. It was found that the oxidation of MgF2 in this atmosphere was mainly related to the concentration of HFC-134a, the temperature, and the reaction time. With decreasing HFC-134a concentration, increasing temperature, and prolonged reaction time, the degree of oxidation of MgF2 increased. The results can provide a theoretical basis for the study of the protection mechanism of HFC-134a gas for magnesium and its alloy melts.
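As a back-of-envelope check on the gravimetric interpretation above — our illustration, not a calculation from the paper — suppose a mole fraction $x$ of the MgF2 is fully converted to MgO and no other mass change occurs. The relative weight loss is then

$$\frac{\Delta m}{m_0} = x\left(1 - \frac{M_{\mathrm{MgO}}}{M_{\mathrm{MgF_2}}}\right) = x\left(1 - \frac{40.3}{62.3}\right) \approx 0.353\,x,$$

so complete conversion corresponds to a mass loss of about 35%, and an observed loss of, say, 3.5% would imply that roughly 10% of the MgF2 had oxidized. This relation ignores any mass gain from fluorine- or carbon-containing species deposited from the HFC-134a atmosphere.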
2019-12-12T10:51:06.310Z
2019-12-06T00:00:00.000
{ "year": 2019, "sha1": "bcec4feb1155dbd02f7b21b8c05fc14a5c7efede", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/688/3/033086", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "1d626ba94b0bc338bde6408e7098a5ce93939372", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Materials Science" ] }
214681416
pes2o/s2orc
v3-fos-license
Impact of etonogestrel implant use on T-cell and cytokine profiles in the female genital tract and blood Background While prior epidemiologic studies have suggested that use of the injectable progestin-based contraceptive depot medroxyprogesterone acetate (DMPA) may increase a woman's risk of acquiring HIV, recent data have suggested that DMPA users may be at a similar risk for HIV acquisition as users of the copper intrauterine device and the levonorgestrel implant. Use of the etonogestrel implant (Eng-Implant) is increasing, but there are currently no studies evaluating its effect on HIV acquisition risk. Objective To evaluate the potential effect of Eng-Implant use on HIV acquisition risk by analyzing HIV target cells and cytokine profiles in the lower genital tract and blood of adult premenopausal HIV-negative women using the Eng-Implant. Methods We prospectively obtained paired cervicovaginal lavage (CVL) and blood samples at 4 study visits over 16 weeks from HIV-uninfected women between ages 18–45 with normal menses (22–35 day intervals), no recent hormonal contraceptive or copper intrauterine device (IUD) use, no clinical signs of a sexually transmitted infection at enrollment, and who were medically eligible to initiate the Eng-Implant. Participants attended pre-Eng-Implant study visits (week -2, week 0), with the Eng-Implant inserted at the end of the week 0 study visit, and returned for study visits at weeks 12 and 14. Genital tract leukocytes (enriched from CVL) and peripheral blood mononuclear cells (PBMC) from the study visits were evaluated for markers of activation (CD38, HLA-DR), retention (CD103) and trafficking (CCR7) on HIV target cells (CCR5+CD4+ T cells) using multicolor flow cytometry. Cytokines and chemokines in the CVL supernatant and blood plasma were measured in a Luminex assay. We estimated and compared study endpoints among the samples collected before and after contraception initiation with repeated-measures analyses using linear mixed models. Results Fifteen of 18 women who received an Eng-Implant completed all 4 study visits. The percentage of CD4+ T cells in CVL was not increased after implant placement, but the percentage of CD4+ T cells expressing the HIV co-receptor CCR5 did increase after implant placement (p = 0.02). In addition, the percentage of central memory CD4+ T cells (CCR7+) in CVL increased after implant placement (p = 0.004). The percentage of CVL CD4+ CCR5+ HIV target cells expressing activation markers after implant placement was either reduced (HLA-DR+, p = 0.01) or unchanged (CD38+, p = 0.45). Most CVL cytokine and chemokine concentrations were not significantly different after implant placement, except for a higher level of the soluble lymphocyte activation marker sCD40L (p = 0.04) and lower levels of IL-12p70 (p = 0.02) and G-CSF (p<0.001). In systemic blood, none of the changes noted in CVL after implant placement occurred, except for decreases in the percentage of CD4+ T cells expressing HLA-DR (p = 0.006) and in G-CSF (p = 0.02). Conclusions Eng-Implant use was associated with a moderate increase in the availability of HIV target cells in the genital tract; however, the percentage of these cells that were activated did not increase, and there were minimal shifts in the overall immune environment. Given the mixed nature of these findings, it is unclear if these implant-induced changes alter HIV risk. Introduction As approximately 40% of all pregnancies worldwide are unintended [1], prevention of unintended pregnancy is a public health priority.
Over 200 million women worldwide use progestin-containing hormonal contraception (HC), either alone or in combination with estrogen, to achieve their family planning goals [2]. Some research suggests that HC may contribute to the spread of HIV by increasing susceptibility to infection [3]. The greatest concern has been with depot medroxyprogesterone acetate (DMPA), for which a meta-analysis of nine studies estimated a significant 40% increase in HIV risk compared with non-HC use [3]. Notably, these findings have not been demonstrated consistently across studies, and all of the studies were observational and thus prone to potential confounding. Further, the findings have been challenged by the results of the Evidence for Contraceptive Options and HIV Outcomes (ECHO) study [4], a large randomized trial that found no significant increase in HIV risk among users of DMPA compared to copper intrauterine device and levonorgestrel implant users. While these data are reassuring, gaps remain in our understanding of the relative HIV risk with contraceptive use compared to non-use and of the impact of other progestin-containing contraceptive methods. Notably, data on the etonogestrel implant (Eng-Implant) are limited despite increasing rates of global use of these contraceptive implants. There are several potential non-contraceptive effects of potent steroid hormones. High levels of estrogen and progesterone during pregnancy are associated with a shift from a TH1- to a TH2-dominant immune profile, dampening pro-inflammatory pathways and increasing susceptibility to certain disease conditions (e.g., influenza, malaria, listeria) while reducing the severity of others (e.g., multiple sclerosis, rheumatoid arthritis) [5-9]. This natural phenomenon lends credence to the premise that hormone concentrations drive immune changes. HIV risk could thus be amplified by an increased representation of cells expressing HIV co-receptors within the female genital tract or by trafficking of HIV target cells to the genital mucosa. CD4+ T cells expressing the cell surface receptor C-C chemokine receptor type 5 (CCR5, the primary HIV co-receptor) are among the first cells to be infected, after which the virus can spread to regional lymph nodes [10]. Within the lower genital tract mucosa, the number and type of cellular targets, primarily CD4+ T cells expressing CCR5, predict susceptibility to HIV infection [11,12]. The functional properties exhibited by CD4+ T cells influence susceptibility to HIV infection; specifically, the expression of activation-associated molecules (markers such as HLA-DR and CD38) has been associated with increased risk of HIV acquisition [13-16]. Prior studies have evaluated the immune effects of DMPA on these cellular markers, with some, but not all, noting increases in key HIV target cell markers [17-23]. No studies have explored the effect of the Eng-Implant on these key cellular markers of HIV risk, nor have larger epidemiologic cohorts been evaluated for an association between Eng-Implant use and HIV acquisition. Given this gap, we aimed to prospectively examine the effect of Eng-Implant initiation on the systemic and lower genital tract mucosal immune environment, with a focus on HIV target cells. The Eng-Implant is a long-acting, highly effective progestin-only contraceptive method containing a third-generation progestin.
Although etonogestrel has less glucocorticoid activity than the medroxyprogesterone in DMPA and inhibits endogenous estrogen less, we hypothesized that, given the sustained progestin exposure over time, there would still be some immunologic changes within the genital tract suggesting increased susceptibility with use. Study population and recruitment This was a prospective study to evaluate the effect of three months of Eng-Implant use on HIV target cells and inflammatory markers in the lower genital tract and systemic circulation. This manuscript is the first to evaluate one of the primary study objectives of a larger cohort study of three contraceptive methods registered at clinicaltrials.gov, NCT02357368. Women recruited into the larger cohort could initiate the Eng-Implant, the levonorgestrel intrauterine device, or DMPA based on their preference. For this analysis, we focus on the results from all individuals selecting the Eng-Implant, as the differences in baseline characteristics and the relatively small sample size limit our power to make comparisons between methods. Women interested in initiating a new contraceptive method were recruited from the metro-Atlanta area via community-based postings or local referral from clinics. We enrolled eligible women between ages 18 and 45 who had experienced normal menses (22–35 day intervals) for at least three cycles, had an intact uterus and cervix, and were HIV uninfected (determined by a point-of-care rapid test, OraQuick®). Participants could not have used HC or a copper intrauterine device (IUD) in the previous 6 months or have any signs of an STI on clinical examination at the time of enrollment, and they needed to be medically eligible to initiate their selected contraceptive method (for this analysis, the Eng-Implant) based on CDC medical eligibility criteria for contraceptive use and clinical judgment. Approval for this study was obtained from the Emory IRB and the Grady Research Oversight Committee prior to study initiation. Written informed consent was obtained from all participants, and all laboratory researchers and technicians were blinded to contraceptive exposure. Study procedures/clinical visits The primary exposure of interest was the Eng-Implant (Nexplanon®, Merck & Co., Inc.) [24]. We scheduled four study visits for each participant: two visits prior to contraceptive initiation and two visits approximately three months after Eng-Implant administration. Study visits were scheduled with the goal of pre-contraceptive sample collection at both the luteal phase (visit 1: target of 21 days after the last menstrual period, with a window from day 17 to the onset of the next menses) and the follicular phase (visit 2: target of three days after completion of menses, with a window of up to 14 days after onset of menses) of the menstrual cycle, based on the self-reported date of the last menstrual period. The Eng-Implant was placed at the completion of visit 2. Post-contraceptive sample collection occurred at two visits two weeks apart, approximately three months after contraceptive initiation: visit 3, with a target of 12 weeks after contraceptive initiation (window 11-14 weeks), and visit 4, with a target of 14 days after visit 3 (window of 12 to 16 days). The a priori goal was to compare the baseline results (with follicular and luteal variation accounted for) with the post-contraception results (with two visits to account for variation over a two-week period during which endogenous hormonal changes may occur).
We requested participants to abstain from vaginal intercourse for 24 hours prior to each visit to minimize the risk of contamination of genital tract samples by semen. Specimen collection During a speculum examination, we collected a cervicovaginal swab for sexually transmitted infections (DrySwab™, Lakewood Biochemical Company). This was followed by a cervicovaginal lavage (CVL), lavaging the cervix, vaginal walls and posterior fornix with 10 ml of phosphate-buffered saline (PBS) for approximately 60 seconds as per the protocol described by the Microbicide Trials Network (https://vimeo.com/224957115/00cb72fed6), with details previously described [25]. To enhance cellular yield, CVL was performed twice. We collected blood in 8 mL sodium citrate-containing CPT tubes (BD Biosciences). CVL allows enrichment of target cells positioned at the apical lumen in proximity to exposure, with a lower risk of tissue trauma from sampling that would cause bleeding and contamination of phenotyping. CVL (luminal) cells are not embedded within the tissue, persist within a harsh environment, and have a reduced cell yield compared with other sampling approaches, but CVL provides an accurate means of tissue-resident phenotyping at the site of sexually transmitted exposure. In experiments where luminal T cells are analyzed separately from T cells embedded in the tissue, these two populations have been shown to be very similar phenotypically and functionally [26-31]. Microscopically, it has been shown that luminal T cells remain closely associated with the apical face of the epithelium [32][33][34][35][36]. Several studies have shown these luminal T cells are viable, capable of recognizing and responding to antigen, and play a critical role in immunity at mucosal sites [28,35,[37][38][39][40][41]]. Luminal T cells are sufficient to provide significant protection even when T cells located in the underlying tissues are not present [41]; thus, although CVL may not yield a large number of T cells, these cells can be critical for barrier protection. With our methodology we have high viability of the cells (70-90%), and although cell count numbers for leukocytes are low, these counts are within the range of other sampling methodologies [25,42]. Covariate assessment The following covariates were measured at each visit. 1) Semen presence in CVL: we detected semen presence using the Abacus ABAcard p30 test to detect prostate-specific antigen (PSA). 2) Presence of sexually transmitted infections (STIs): we collected a cervicovaginal swab for STIs (DrySwab™, Lakewood Biochemical Company) from the cervical os at each visit prior to CVL. DNA was extracted using the Qiagen DNA Mini Kit and used to amplify targets from Neisseria gonorrhoeae, Chlamydia trachomatis, Mycoplasma genitalium, Trichomonas vaginalis, and Herpes simplex virus types 1 and 2, using two real-time duplex PCR assays and a Qiagen Rotor-Gene Q real-time PCR instrument. Qiagen Rotor-Gene Q Series software was used to analyze data. These multiplex PCR assays were performed in the Division of Sexually Transmitted Diseases Laboratory Reference and Research Branch at the US Centers for Disease Control and Prevention. 3) Bacterial vaginosis (BV): we determined the presence of BV by Nugent score criteria [43] from gram stains prepared from CVL. Prior comparison data from our lab between CVL smears and swabs among 37 sample pairs were highly correlated (r>0.88, p<0.0001), with categorical interpretations in agreement for all slides.
Scores above six are considered consistent with BV. 4) Blood presence in the CVL: we defined blood presence qualitatively with a urine dipstick test detecting ≥8000 RBC/μl. This cutoff was selected to capture potential systemic blood contamination; however, exploration and use of other cutoff values did not meaningfully alter our study findings. Immune marker assessment Specimens were placed in a cooler with ice immediately after collection and transported to the Division of HIV/AIDS Prevention Laboratory Branch at the Centers for Disease Control and Prevention (CDC) within four hours of collection for processing, cellular isolation and characterization. Blood was separated into plasma and peripheral blood mononuclear cells (PBMCs) by centrifugation in CPT tubes as instructed by the manufacturer. After collecting blood plasma, PBMCs were collected from the CPT tube and washed with PBS prior to staining. CVL specimens were enriched for leukocytes using Percoll gradient centrifugation as previously described [25]. Plasma and CVL supernatant aliquots were stored at -80˚C until analysis. Cellular characterization was then performed on CVL leukocytes and PBMCs via flow cytometry. Viable leukocytes were distinguished using Zombie Fixable Viability Kits (Biolegend), then blocked for non-specific staining with anti-CD16/32 Fc (BioXcell). The primary outcome of interest was the proportion of CD4 cells with CCR5 expression. Secondary outcomes evaluated were: 1) the CD4/CD8 T-cell ratio, to measure changes in T-cell homeostasis; 2) the expression of activation markers CD38 and HLA-DR, peripheral tissue retention marker CD103 [44] and trafficking marker CCR7 on CD4 T-cells or CD4 CCR5+ T-cells; and 3) the distribution of T-cell memory subsets. Stained samples were run on an LSRII flow cytometer, acquired using FACSDiva software (BD Immunocytochemistry Systems, San Jose, CA, USA) and analyzed using FlowJo software (TreeStar, Inc.). Cellular measurements were analyzed as the percentage of CD4+ T cells, or CD4+ CCR5+ T cells, expressing a given marker or combination of markers. For accurate measurement of CCR5 expression frequency on CD4 T cells, CCR5 gating was set against matched naïve CD4 T cells from PBMCs as previously described [45]. Soluble immune mediators from the CVL supernatant and plasma were evaluated using Luminex technology with xPONENT software (Luminex Corporation), with all samples tested in duplicate on a 96-well plate containing seven standards, two quality controls and 39 samples, using a customized multi-analyte panel (HCYTOMAG-60K-18 MILLIPLEX Human Cytokine panel, Millipore). The panel contained selected proinflammatory, inhibitory and chemotactic soluble cytokines and chemokines [IL-1β, IL-6, IL-12(p70), IFN-α2, IFN-γ, IL-1α, IL-17, IL-2, TNF-α, IL-4, GM-CSF, G-CSF, sCD40, MIP-1α, MIP-1β, IP-10, IL-8, Fractalkine (CX3CL1)]. Using the sigmoid standard curve from the Millipore Analyst 5.1 software, a regression curve was extrapolated from the raw data individually for each cytokine. For samples below the level of quantification, we used half the lower limit of detection. Statistical methods Visits were dichotomized into pre-implant use (visits 1 and 2) and post-implant use (visits 3 and 4). Any outcome (cytokine, cellular marker) value below the limit of detection was assigned a value of half of the lowest measured value for that outcome. Only samples with greater than 100 viable CD3+ cells extracted were included in the analyses, a similar approach to Lajoie et al [46].
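To make this preprocessing concrete, the following is a minimal sketch of the two data-handling rules just described (pre/post dichotomization and half-of-lowest-measured-value substitution) together with the Benjamini-Hochberg false-discovery correction applied later in this section. It is illustrative only: the analysis was actually run in SAS v9.4, the column names here are hypothetical, and below-detection values are assumed to be coded as zero.

```python
import pandas as pd

def preprocess(df: pd.DataFrame, outcome_cols, min_cd3=100) -> pd.DataFrame:
    """Filter to samples with >100 viable CD3+ cells, dichotomize visits into
    pre/post implant use, and replace below-detection values (coded as 0)
    with half the lowest measured value for each outcome."""
    df = df[df["cd3_count"] > min_cd3].copy()
    df["implant"] = df["visit"].map(lambda v: "pre" if v in (1, 2) else "post")
    for col in outcome_cols:
        detected = df.loc[df[col] > 0, col]
        if not detected.empty:
            # keep detected values, substitute half the minimum elsewhere
            df[col] = df[col].where(df[col] > 0, detected.min() / 2.0)
    return df

def benjamini_hochberg(pvals, q=0.1):
    """Benjamini-Hochberg step-up procedure at FDR q (q = 0.1 per the methods).
    Returns booleans marking which hypotheses survive the correction."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    cutoff = 0
    for rank, i in enumerate(order, start=1):  # largest rank with p <= q*rank/m
        if pvals[i] <= q * rank / m:
            cutoff = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        reject[i] = rank <= cutoff
    return reject
```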
To evaluate potential associations that might confound the interpretation of study findings, we conducted separate logistic mixed models to assess the association of implant status (pre, post) with CVL visit characteristics (semen, STI, blood, BV). Models contained the covariate of interest, a random intercept for subject, and a variance-components covariance structure. This statistical approach was selected for the evaluation of longitudinal data with repeated measurements of the same patient over time [47]. Separate generalized linear mixed models with a gamma distribution and log link were used to assess the association of each cytokine (IL-1β, IL-6, IL-12(p70), IFN-α2, IFN-γ, IL-1α, IL-17, IL-2, TNF-α, IL-4, GM-CSF, G-CSF, sCD40), chemokine (MIP-1α, MIP-1β), chemotactic cytokine (IP-10, IL-8, Fractalkine) and cellular marker (CD4 CCR5, CD4 CD38, CD4 HLA-DR, CD4 CD103, CD4 CCR7) outcome with implant use (pre, post). Models included implant use, a random intercept for subject and a variance-components covariance structure. All models were stratified by tissue type (CVL, blood). CVL models additionally included presence of semen, presence of blood, STI, and BV status as covariates. Model-based estimates and 95% confidence intervals of the estimated mean outcome level by implant use were back-transformed (exponentiated) to produce estimated arithmetic means on the original scale. Similarly, the estimated arithmetic mean ratio (AMR) and 95% confidence interval of post-implant use to pre-implant use were produced by exponentiating the coefficient for implant use. Linear mixed regression models were used to assess whether the distribution of lymphocyte memory cells into four mutually exclusive groups (T_NA, T_CM, T_EM, T_EMRA) varied by implant use. Models contained memory cell type, implant use, and a memory cell type × implant use interaction term. The Type 3 F test of the interaction term is reported, as well as model-based estimates and 95% confidence intervals for memory cell type and implant use. The models included robust variance estimates and a compound symmetry covariance structure by subject, grouped by memory cell type nested within implant use, and were stratified by CD4 and CD8 T-cell phenotypes and tissue type. Model fit was assessed through residual plots. To reduce the potential impact of multiple comparisons on false discovery, we interpreted our results with an adjusted p-value using a Benjamini and Hochberg false discovery rate of 0.1 for each set of analyses within specimen type and cytokine/cellular marker sets. Memory cell type analyses set α = 0.05. Analyses were conducted in SAS v9.4. Results Eighteen women enrolled in the study and completed both pre-contraceptive visits; 16 women completed visit 3 (88.9%) and 15 completed all 4 visits (83.3%). All pre-contraceptive luteal and follicular samples were collected during the appropriate windows described in the study methods, with visit 3 and visit 4 conducted at a median of 84 days (Q1: 83, Q3: 86.5 days) and 105 days (Q1: 98, Q3: 111) post-contraceptive initiation, respectively. Women were predominately African-American (83%), unmarried (83%), and young (median age 24 years) (Table 1). CVLs collected at 53 visits (79% of all visits) contained greater than 100 viable CD3+ T-cells and were subsequently included in this analysis. There was no association between having fewer than 100 CD3+ T-cells and visit number (data not shown).
STIs were diagnosed by PCR at 28 (42%) visits, and BV was diagnosed by Nugent score at 27 (42%) visits (Table 2). There were no significant differences in any of the visit-level covariates before and after implant placement. In the lower genital tract, we noted a significant increase in the proportion of CD4 cells expressing CCR5 after implant placement compared to measures taken prior to placement [AMR 1.56, 95% CI 1.09-2.24] (Table 3). Additionally, Eng-Implant use significantly changed the distribution of T-cell subtypes among both the CD4 (p = 0.004, Fig 1A) and CD8 T-cells (p = 0.023, Fig 1B), with an observed shift away from the effector memory subtype. Among the PBMCs, there were minimal changes in the distribution of T-cell phenotypes, with no noted changes in T-cell ratios or CCR5 co-receptor expression (Table 4). There was a significant increase in CCR7 expression on CCR5+ CD4 T-cells [AMR 1.30, 95% CI 1.08, 1.57] and decreased HLA-DR on CD4 T-cells [AMR 0.81, 95% CI 0.70, 0.90]. These findings remained significant with the adjusted alpha value. There was a significant difference in the distribution of memory cell phenotypes among the CD4 T-cells (p = 0.014), with an observed shift towards more naïve cells and reduced effector memory cells (Fig 2A and 2B). Overall, lower genital tract cytokine expression was similar before and after implant initiation (Fig 3, S1 Appendix). Discussion The results of our study highlight few changes in the lower genital tract inflammatory environment following Eng-Implant initiation. While several studies have evaluated genital tract immune changes after use of other hormonal contraceptive methods [17][18][19][20][21][22][23][48][49][50][51][52][53][54][55][56][57][58][59][60][61], to our knowledge, no published study has evaluated the Eng-Implant. We report an increase in the proportion of CD4 T-cells expressing the co-receptor CCR5 at the genital mucosa with Eng-Implant use that could be associated with increased risk of HIV infection; however, not all of our findings present a clear picture of increased susceptibility. For example, implant placement reduced the frequency of the activation marker HLA-DR, but not CD38, among CD4 T-cells. The clinical significance of these findings for HIV acquisition is unclear. Further, while there were minimal changes in soluble immune markers in the lower genital tract, these changes suggest a slight increase in local immune suppression, with reduced concentrations of proinflammatory cytokines, a finding similarly noted with DMPA [54]. As we observed some shifts in T-cell populations in the genital tract associated with Eng-Implant use, we cannot eliminate the possibility that the Eng-Implant has an effect on HIV acquisition. This is important when interpreting the results of the recently conducted ECHO study [4], which did not find a significant difference in HIV acquisition between DMPA users and users of the copper intrauterine device and the levonorgestrel implant. The ECHO results are encouraging in that DMPA did not differ from these other methods in relation to HIV risk. However, the ECHO study was powered to detect a clinically significant increased risk of 50%, and conclusions regarding other methods cannot be made. Furthermore, while we evaluated the etonogestrel and not the levonorgestrel implant, we find some changes in immunologic markers with unclear impact on susceptibility.
While small changes in individual risk should not alter eligibility for use of a contraceptive method [62], with increasing global utilization of many of the longer-acting contraceptive methods it is important for research to identify even subtle differences that may influence counselling for high-risk individuals and have public health importance. The increased expression of CCR5 at the genital mucosa may reflect infiltration of T-cells or a direct effect of the implant on the expression of CCR5 [63]. The increased expression of CD103, coupled with the shift from the canonical effector memory phenotype towards a central or migratory memory subtype, suggests that infiltration and retention are drivers of this shift [25]. The CD4/CD8 T-cell ratio further supports that Eng-Implant use influences the trafficking patterns of immune cells. The increased frequency of CD8 T-cells at the genital mucosa is provocative and hints at potential alterations in local inflammation. Prior research suggests that effector CD8 T-cells cannot enter the vagina without CD4 T-cell permission in the form of activation-associated cytokines [33]. While our cytokine findings do not fully support this mechanism, it is possible that soluble cytokine measurements simply fail to detect it. A decrease in granulocyte colony-stimulating factor (GCSF) was noted in the genital tract after implant initiation. A similar reduction in GCSF was also observed in plasma (although not significant after adjusting for multiple comparisons). This small yet significant reduction in GCSF may signify an alternative pathway of altered HIV susceptibility through damaged mucosa. GCSF may induce an inflammatory reaction enhancing neutrophil function. With receptors on granulosa cells, GCSF has been implicated in ovulation and thus could be downregulated in the setting of the ovulation inhibition associated with implant use [64]. GCSF is also associated with wound healing and has been associated with faster healing of genital ulcerations [65]. GCSF stimulates the proliferation and differentiation of cells that participate in acute and chronic inflammation and immune responses, including mature leukocytes, macrophages, and dendritic cells [66]. This potential mechanism of altered immune response should be further explored to determine whether it is clinically significant. The mechanism by which progestin contraceptives may influence immune expression in the genital tract is not fully elucidated. Progestins may act via alteration of gene expression after binding to and activating intracellular steroid receptors [63], which vary among tissue cell types. Gonadal hormones can regulate the expression of numerous genes involved in multiple cellular functions [67,68], with the effects modified by cell type and by the presence of other hormones and transcription factors; in addition, binding of progestins by steroid receptors other than the progesterone receptor can result in agonist or antagonist activity. Further, the biological effect can vary based upon the dose of progestin. High progestin levels can cause thickening of cervical mucus that creates a barrier to sperm ascent, suppress ovulation and alter the endometrial lining.
Progestin-containing contraceptive methods can differ by their mode of delivery, length of effectiveness, global availability, degree of endogenous hormone and ovulation inhibition, and the type of progestin they contain, with varying degrees of estrogenic, androgenic, anti-androgenic, glucocorticoid and anti-mineralocorticoid activity [69,70]. For example, medroxyprogesterone acetate (MPA), the synthetic progestin in DMPA, has potent glucocorticoid (GC) activity, ENG has weaker GC activity, and levonorgestrel (LNG) has no GC activity. Given the differences among contraceptive methods, it is important to understand the immunologic effect of varying types of contraception in order to understand their potential and relative impact on reproductive health and immunity. Notably, even among individuals using the same contraceptive, serum progestin concentrations can vary widely, and these differences may influence the effect of the particular contraceptive [63]. Eng-Implant users may likewise vary in serum concentrations with implant use, as well as in tissue-level exposure and tissue responsiveness via steroid receptors. Understanding the individual-level factors that influence both systemic hormonal concentrations and the mucosal-level response to the hormone is critical to provide guidance for individual-level counseling. While other studies have not evaluated the Eng-Implant, prior studies evaluating the effects of DMPA have produced conflicting results [17][18][19][20][21][22]. For example, while two studies did not see changes in vaginal HIV target cells with DMPA use [18,23], another found significantly higher frequencies of CCR5+ CD4+ T-cells (relative risk: 3.92) compared to non-users [22]. A recent cross-sectional analysis comparing 15 DMPA users to 20 non-hormonal contraceptive users found higher levels of activated T-cells and a higher proportion of CD4+CCR5+ T-cells among DMPA users in tissue biopsy samples; however, this increase was not noted among the cervical mononuclear cells obtained via cytobrush and cervical spatula [46]. Some of the inconsistencies may be related to differences in study population, study methodology (cross-sectional versus longitudinal), timing of sample collection in relation to luteal or follicular phases or in relation to hormonal contraception, sample collection approach, or laboratory methodology. Importantly, vaginal immune parameters are influenced by many factors and are quite variable within and between individuals. This variability may account for some of the discrepancies among DMPA studies and highlights the need to interpret the results of our study in the context of future research among different study populations, exploring individual-level factors that may account for variable responses. A strength of our study design is that we captured two time points over the course of four weeks both before and after implant placement, in order to characterize the overall environment given changes across a cycle with endogenous hormonal exposure. Given our small sample size, these results should be interpreted cautiously. While our sample size limits our power to evaluate subtle immunologic changes, the changes that we do identify highlight the need for larger, more robust studies to determine if these changes influence a woman's susceptibility to HIV infection. The longitudinal nature of this study allows us to control for measured and unmeasured biases that occur in the cross-sectional studies that are the predominant study type in the field.
Although we excluded women from participation with clinical evidence of any infection at baseline, several women had asymptomatic infections diagnosed or acquired infections over the course of the study. Individuals with sexually transmitted infections, bacterial vaginosis, and recent semen exposure, factors independently known to alter HIV susceptibility, were not excluded from this analysis; rather, the time-varying presence of these exposures was controlled for in our final models, so we feel these findings are likely more representative of real-world conditions. Although BV, STIs and semen may modify HIV susceptibility, larger studies are needed for adequate power to analyze the potential effect modification of these risk factors. Although heterogeneity in the endogenous hormonal response to the contraceptive is possible and we did not measure and control for endogenous hormonal levels, we chose to include 2 time points post-initiation to help control for some of that variability. As there are known variations in local immune factors with these infections, the inclusion of these women may have contributed to reduced power for detecting a difference in some study outcomes. As the women were self-selected, individual differences that could underlie differential responses to contraceptive exposure may limit the generalizability of our results. Importantly, given the wide range of variability in the number of cells collected from the CVL, we evaluated the proportion of cells expressing different cellular markers and not the total number of cells expressing these markers. Lastly, as we are reporting on markers of HIV susceptibility, any extrapolation to quantify the degree to which these factors may alter true susceptibility is limited. There are multiple benefits of contraceptives beyond fertility control, including reductions in abortion, maternal and neonatal morbidity, and perinatal HIV transmission. Our findings relating the Eng-Implant to HIV susceptibility markers are subtle, with unclear clinical impact, and are consistent with the ECHO trial findings. Informed decision-making must include information about the superior typical-use effectiveness of long-acting reversible contraceptive methods, such as the Eng-Implant. Additionally, informed consent requires that we share information on the lack of clear evidence of increased HIV susceptibility with all hormonal contraceptive methods, alongside promotion of dual method use with condoms.
2020-03-29T07:15:42.597Z
2020-03-26T00:00:00.000
{ "year": 2020, "sha1": "3633f2fecc93339a6a2e3270dc38950f478bbac2", "oa_license": "CC0", "oa_url": "https://doi.org/10.1371/journal.pone.0230473", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d9def368e511262b16addae963dd4bbca76a8e59", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
221844581
pes2o/s2orc
v3-fos-license
CD36 facilitates fatty acid uptake by dynamic palmitoylation-regulated endocytosis Fatty acids (FAs) are essential nutrients, but how they are transported into cells remains unclear. Here, we show that FAs trigger caveolae-dependent CD36 internalization, which in turn delivers FAs into adipocytes. During the process, binding of FAs to CD36 activates its downstream kinase LYN, which phosphorylates DHHC5, the palmitoyl acyltransferase of CD36, at Tyr91 and inactivates it. CD36 then gets depalmitoylated by APT1 and recruits another tyrosine kinase, SYK, to phosphorylate JNK and VAVs to initiate endocytic uptake of FAs. Blocking CD36 internalization by inhibiting APT1, LYN or SYK abolishes CD36-dependent FA uptake. Restricting CD36 to either the palmitoylated or the depalmitoylated state eliminates its FA uptake activity, indicating an essential role of dynamic palmitoylation of CD36. Furthermore, blocking endocytosis by targeting LYN or SYK inhibits CD36-dependent lipid droplet growth in adipocytes and high-fat-diet-induced weight gain in mice. Our study has uncovered a dynamic palmitoylation-regulated endocytic pathway to take up FAs. Comments: 1-In all figures involving microscopy: More than one cell should be shown in the figure panels and some form of quantification should be included. With all adipocyte image panels, considering the difficulty of showing internalization in these cells as a result of the large lipid droplets, including a membrane marker could be helpful. For example, in figure 1e the C16 and C20:4 images show what appears to be relatively little CD36 internalization, while the blot in panel f shows that CD36 internalization was identical with all FA tested. In addition, it would be helpful to include a movie in the supplement on the 3D Z-stack reconstruction. 2-In Figure 1, surface biotinylation is used to monitor membrane depletion of CD36 and Cav1. Why is Cav1 biotinylated, since it is present on the internal leaflet of the membrane? As a result, how quantitative is Cav1 biotinylation? Considering that adipocytes have abundant Cav1 expression, it is hard to imagine Cav1 depletion by CD36 internalization. The authors should address this technically. In addition, they should provide other membrane proteins as controls in the western blots. Do the adipocytes show fewer caveolae at 1h after FA? Maybe electron microscopy would be helpful. 3-In Figure 2, the PacFA is used to show that the FA is co-localized with CD36, and co-localization is shown to average 60% (panel b). However, the images presented appear to show less than 60% localization, suggesting that more cells need to be shown and more than 10 cells should be counted. -There are two controls that need to be provided: Show that PacFA is similar to native FA in terms of processing by adipocytes, and include data on the specificity of the click reaction in targeting FA-interacting proteins (and not proteins that are not thought to interact with FA). -It would also be informative here to include CD36-/- adipocytes. 4-In Figure 2 panel d: why is a 1h timepoint used for oleate uptake when CD36 membrane depletion is almost complete at that time? In addition, figures 5 and 6 show that many of the steps involved in CD36 internalization occur within 5 min. What is the early time course of FA uptake? Is the labeled FA assay being used only as a marker of lipid droplets? 5-In figure 3 panels c and d, the efficiency of APT1 depletion in adipocytes should be shown. The related data presented in the supplement are not sufficiently convincing.
Show more cells in panel b and strengthen the documentation of internalization as mentioned under comment 1. 6-In figures 5 and 6, more immunostained cells should be shown. Figure legends state that the experiments were repeated at least twice. Some quantification of internalization is needed. Again, uptake is measured at 1h, which seems to be out of sync with the events regulating endocytosis. A better time course of endocytosis should be presented. -In figure 6 it is presumed that depalmitoylation of CD36 and recruitment of Syk are occurring at the membrane, since they are proposed to be important for endocytosis to occur. Can Syk translocation and recruitment be directly documented? 7-Figure 7: The in vivo data are difficult to interpret as showing the effect of blocking FA uptake in adipocytes. No data are presented to confirm that food consumption and absorption (postprandial lipids) are similar between groups. Circulating lipid levels are needed. If these parameters are the same, then where is the lipid going? Liver, muscle? In addition, body composition data would be nice to confirm that the change in weight is due to impaired adiposity. 8-In the discussion, the authors should provide some explanation of how SYK, VAV and JNK initiate endocytosis. Minor comment: what do the authors mean by "binding of FAs to CD36 activates its downstream kinase LYN, thereby passing the signal from the cell surface to the intracellular depots"? Reviewer #2 (Remarks to the Author): This study addresses the mechanism and regulation of fatty acid uptake by CD36. The authors demonstrate that removal of CD36 from the cell surface is stimulated in the presence of fatty acids in a process that is dependent upon caveolin. The authors uncover a cycle of palmitoylation and depalmitoylation, mediated respectively by the palmitoyltransferase DHHC5 and the depalmitoylase APT1, that is required for CD36 internalization and fatty acid uptake. The authors identify two tyrosine kinases, Lyn and Syk, that are activated downstream of fatty acid binding to CD36. Lyn phosphorylates DHHC5, enabling depalmitoylation of CD36 and subsequent recruitment of Syk, which in turn recruits the cellular machinery required for internalization of CD36. The physiological relevance of this new pathway is demonstrated by showing the effects of an endocytosis block on lipid droplet growth in adipocytes and high-fat-diet-induced weight gain in mice. The study will be of interest in the fields of protein lipidation and lipid metabolism. The authors uncover the mechanism that underlies a regulatory cycle of palmitoylation and depalmitoylation with an important physiological impact. Overall, the study is technically sound and in most cases the data support the conclusions drawn by the authors. However, there are some shortcomings that should be addressed. These are outlined as follows. 1. The authors identify APT1 as the depalmitoylase of CD36. This was somewhat surprising in that APT1 is reported to be localized in the mitochondria (Kathayat RS et al. Nat Commun. 2018 9:334. doi: 10.1038/s41467-017-02655-1). Candidate depalmitoylases other than APT1 were excluded only on the basis of CD36 localization after knockdown of individual depalmitoylases. A more rigorous approach to testing for the involvement of other depalmitoylases should be taken. 2. In some cases, poor image quality precludes the reader from assessing the data. This is true for Figure 3h and Figure 4h. Better images should be presented.
4. In Figure 5b, how expression of SRC family kinases was assessed should be stated or provided in the Methods section. 5. Cell numbers were not provided for the quantitation of Figure 5j. Images are of poor quality. More information on how BODIPY intensity is quantified should be provided in the methods section. 6. Knockdown of Src-family kinases should be confirmed by showing reduced protein levels - at least for Lyn kinase. Reviewer #1 (Remarks to the Author): This study is a nice follow-up on a previous report (reference 20) by the same group of investigators documenting the importance of CD36 palmitoylation in fatty acid (FA) uptake. In the present manuscript, the authors show that in adipocytes FAs internalize CD36 from caveolae together with Cav1 and that dynamic CD36 depalmitoylation is needed. Response: Thank you for your suggestions. Our original thought in showing one cell per image was to make the endocytosis more visible. According to the suggestion, we have re-performed all the immunostaining assays and included 2 cells or more in each image. To better illustrate the results, we have also enlarged one cell in each image in the revised manuscript. To show the extent of internalization quantitatively, we quantified the surface content of CD36 in the surface biotinylation assay. Each quantification was performed from 3 individual experiments, and the quantified data are now included in the revised manuscript. As the reviewer suggested, we screened some of the reported plasma membrane markers and found that ATP1A1, a subunit of the Na+/K+-ATPase, showed a good immunostaining signal and stayed on the plasma membrane under both basal and FA-treated conditions. We therefore used ATP1A1 as a plasma membrane marker and have clearly demonstrated that CD36 is indeed internalized after fatty acid treatment. In Fig. 1a and Fig. 1c, where we performed immunostaining using anti-CD36 and anti-CAV1 antibodies, as the primary antibodies against CAV1 and ATP1A1 are both rabbit antibodies, we could not immunostain ATP1A1 at the same time. For the rest of the panels, we immunostained both CD36 and ATP1A1, and we could clearly see the fatty acid-induced endocytosis. As suggested, we have now included a movie in the supplement on the 3D Z-stack reconstruction. 2-In Figure 1, surface biotinylation is used to monitor membrane depletion of CD36. Response: Thank you for the question. As the reviewer pointed out, CAV1 is in the inner leaflet of caveolae, and CD36 is mainly on the outer membrane. It is known that caveolae are detergent resistant; therefore, when we solubilize the cells in 1% Triton in the surface biotinylation assay, as described in the Methods section, the caveolae structures do not get disrupted. In that way, when we pull down biotinylated CD36, CAV1 is also pulled down. Although we have stated this clearly in the Methods, to avoid any unnecessary confusion, we decided to take CAV1 out of the surface biotinylation results. In terms of including a plasma membrane marker as a control for Western blots, we examined the surface content of ATP1A1 by the surface biotinylation assay. Consistent with the immunostaining results, the surface content of ATP1A1 did not change before and after fatty acid treatment. We have now included ATP1A1 as a plasma membrane marker in all the surface biotinylation experiments. As suggested, we performed electron microscopy.
Consistent with our immunostaining results, the caveolae were mainly localized on the plasma membrane in BSA-treated cells, but the majority of them were internalized 1 hr after oleate treatment. The figure has now been included in Supplementary Fig. 1b. 3-In Figure 2, the PacFA is used to show that the FA is co-localized with CD36, and co-localization is shown to average 60% (panel b). However, the images presented appear to show less than 60% localization, suggesting that more cells need to be shown and more than 10 cells should be counted. -There are two controls that need to be provided: Show that PacFA is similar to native FA in terms of processing by adipocytes, and include data on the specificity of the click reaction in targeting FA-interacting proteins (and not proteins that are not thought to interact with FA). -It would also be informative here to include CD36-/- adipocytes. Response: Thank you for your suggestions. We have re-performed the experiment and quantified the colocalization of PacFA and CD36 in 24 cells. As shown in the new Fig. 2b, the co-localization is still about 56%. We have also replaced Fig. 2a with an image of two cells. In the original paper about PacFA, Haberkant et al. clearly demonstrated that PacFA acts similarly to native FA and can be incorporated into diacylglycerol, triacylglycerol, cholesterol ester and phospholipids in HeLa and CHO cells (Angew Chem Int Ed Engl, 2013, 52, 4033-4038). We did try to perform similar experiments by ordering the reagents, including the 3-azido-7-hydroxycoumarin that is a key reagent in the assay. Due to the outbreak of COVID-19, although we submitted the order in early March, the reagent has not come in yet. Considering that the fatty acid esterification and triglyceride and phospholipid synthesis enzymes are well conserved evolutionarily from yeast to human, it is reasonable to believe that PacFA will act similarly to native FA in adipocytes too. We therefore ask not to perform this experiment. To add a control on the specificity of the click reaction in targeting FA-interacting proteins, we tested whether PacFA would bind fatty acid binding protein 4 (FABP4), a known FA-interacting protein. As shown in Fig. 2c, we found that FABP4 could also be pulled down by PacFA, which confirms that the click chemistry reaction is specific. In terms of performing PacFA treatment in Cd36-/- adipocytes, we need to mention that the major purpose of using PacFA was to demonstrate the co-migration of FAs with CD36. This experiment was not designed for quantification purposes, as UV crosslinking and click chemistry could possibly generate too much variance in a quantitative assay. To quantify FA uptake, we used 3H-oleate, which is more quantitative. As shown in Fig. 2d, Cd36-/- adipocytes show about a 40% decrease in FA uptake activity. If we performed the experiment, Cd36-/- adipocytes would be expected to show a PacFA uptake signal, but not much would be learned from it. As this would not be a quantitative assay and we have measured FA uptake using other methods, we therefore ask not to perform the experiment. 4-In Figure 2 panel d. Response: Thank you for the question. In Fig. 1a,b and Supplementary Fig. 1c, we have clearly demonstrated that the internalization of CD36 did not start until 30 min after oleate treatment. Therefore, in order to measure CD36-mediated endocytosis of fatty acids, it would be better to analyze CD36-mediated endocytosis of FAs at least 30 min after 3H-oleate treatment.
We have actually performed a time course of 3H-oleate uptake and found that fatty acid uptake was linear over 1 hr (see the figure below). That is why we chose the 1-hr timepoint for the 3H-oleate uptake experiment. As the reviewer pointed out, the LYN-SYK-JNK signaling started within 5 min of oleate treatment (Fig. 5 and Fig. 6), while the endocytosis did not start until 30 min. The delay in endocytosis is likely because it takes time to convert the signaling into endocytosis, which requires cytoskeleton reorganization and pinching off of the caveolae from the plasma membrane. As mentioned in Comment #3, PacFA was mainly used to indicate the co-migration of CD36 and FAs, not for quantification purposes. Response: Thank you for your suggestions. We have now included the knockdown efficiency of APT1 in Fig. 3c and 3d. We have also performed more experiments to strengthen the supplementary data. First, we replaced the immunostaining images in Supplementary Fig. 3a and 3h with images of two or more cells per panel. Second, we tested the effect of all of the 5 reported depalmitoylases on CD36 palmitoylation, and found that only APT1 could dramatically decrease the palmitoylation of CD36. Combined with the other experiments in Fig. 3, our data are convincing that APT1 is the depalmitoylase of CD36. For Fig. 3b and 3e, we have performed a new experiment using ATP1A1 to indicate the plasma membrane, and included two cells in these two panels. We have also quantified the surface content of CD36, and the results are now included in Supplementary Fig. 3i and 3j. 6-In figures 5 and 6 more immunostained cells should be shown. Figure legends state that the experiments were repeated at least twice. Some quantification of internalization is needed. Again, uptake is measured at 1h, which seems to be out of sync with the events regulating endocytosis. A better time course of endocytosis should be presented. -In figure 6 it is presumed that depalmitoylation of CD36 and recruitment of Syk are occurring at the membrane, since they are proposed to be important for endocytosis to occur. Can Syk translocation and recruitment be directly documented? Response: Thank you for your suggestions. We have performed new experiments and included two cells in each panel in Fig. 5 and Fig. 6. We have also quantified the surface content of CD36 by the surface biotinylation assay and included the results in Supplementary Fig. S5 and S6. In terms of choosing the 1-hr timepoint, please refer to our response to comment #4. As the reviewer suggested, we tried to document the translocation of SYK to the membrane. As the SYK antibody we had was not good for immunostaining, we isolated the membrane fraction before and 5 min after oleate treatment and detected SYK by Western blot. We found that SYK was not detected in the membrane fractions before oleate treatment, but it was readily detected 5 min after treatment. We have now included the data in Supplementary Fig. 6d. In addition, body composition data would be nice to confirm that the change in weight is due to impaired adiposity. Response: Thank you for your suggestions. We had the data on food intake and plasma lipid levels. Neither bafetinib nor entospletinib treatment had any effect on food intake, plasma free fatty acid or plasma triglyceride levels in WT or Cd36-/- mice. We have now included the data in Supplementary Fig. 9b, d, e.
To figure out where the lipid is going, we measured the liver content of triglyceride and found that both compounds slightly but significantly increased liver triglyceride levels in WT mice, suggesting that some of the lipids were ectopically stored in the liver. These data are now included in Supplementary Fig. 9f. In terms of adiposity, we actually had the data included in the original manuscript (Supplementary Fig. 8b, now Supplementary Fig. 9c). The fat mass was significantly lower in WT mice treated with either bafetinib or entospletinib. Therefore, the decrease in body weight was due to impaired adiposity. Response: Thank you for your suggestions. We have added the following sentences to explain the roles of VAV and JNK: "VAV functions as an adaptor of dynamin 32,33, which facilitates the pinching off of caveolae from the plasma membrane 21. The activated JNK plays an important role in regulating cytoskeleton re-organization and vesicle transport 52, two key events in caveolar endocytosis." Thank you, and we apologize for the confusion about the way we wrote the sentence. We have now changed the sentence to "binding of FAs to CD36 activates its downstream kinase LYN, thereby converting the extracellular stimulus of FAs into an intracellular signaling pathway". Reviewer #2 (Remarks to the Author): This study addresses the mechanism and regulation of fatty acid uptake by CD36. The authors demonstrate that removal of CD36 from the cell surface is stimulated in the presence of fatty acids in a process that is dependent upon caveolin. The authors uncover a cycle of palmitoylation and depalmitoylation, mediated respectively by the palmitoyltransferase DHHC5 and the depalmitoylase APT1, that is required for CD36 internalization and fatty acid uptake. The authors identify two tyrosine kinases, Lyn and Syk, that are activated downstream of fatty acid binding to CD36. The study will be of interest in the fields of protein lipidation and lipid metabolism. The authors uncover the mechanism that underlies a regulatory cycle of palmitoylation and depalmitoylation with an important physiological impact. Overall, the study is technically sound and in most cases the data support the conclusions drawn by the authors. However, there are some shortcomings that should be addressed. These are outlined as follows. As shown in Supplementary Fig. 3c-g, only APT1, but not the others, dramatically reduced the palmitoylation of CD36. These results further strengthen our point that APT1 is the depalmitoylase of CD36. 2. In some cases, poor image quality precludes the reader from assessing the data. This is true for Figure 3h and Figure 4h. Better images should be presented. Response: Thank you, and we apologize for that. For Fig. 3h, the images were indeed too small in the previous submission. We have now enlarged the images in the revised manuscript, and we can clearly see the difference in BODIPY staining between the images. For Fig. 4h, we performed new experiments and replaced the figure. We can clearly see that WT DHHC5, but not the Y91E or Y91F mutant, promotes FA uptake in DHHC5 knockdown cells. Response: Thank you for your suggestion. As the reviewer suggested, we examined the activity of the Y91E and Y91F mutants of DHHC5 in palmitoylating Flotillin-2. As shown in Supplementary Fig. 4, the Y91E mutant showed much decreased activity in palmitoylating Flotillin-2, consistent with our observation using CD36 as the substrate. These results further confirm that phosphorylation at Y91 inactivates DHHC5.
4. In Figure 5b, how expression of SRC family kinases was assessed should be stated or provided in the Methods section. Response: Thank you for your careful reading. The expression of the SRC family kinases was analyzed by an RNA-Seq study in white adipose tissue. We have now stated this in the manuscript. 5. Cell numbers were not provided for the quantitation of Figure 5j. Images are of poor quality. More information on how BODIPY intensity is quantified should be provided in the methods section. Response: Thank you for your suggestions. We quantified 20 cells per treatment in Fig. 5j, and we have included this in the figure legends. Again, we apologize for making Figure 5h too small in the previous submission. We have now enlarged the images and we can clearly see the difference in the intensities of BODIPY between the images. We have also added more details of how to quantify BODIPY intensity in the methods section as "Images were taken under a Zeiss LSM-780 confocal microscope using an excitation wavelength of 488 nm with the same laser intensity. None of the images were overexposed.". 6. Knockdown of Src-family kinases should be confirmed by showing reduced protein levels - at least for Lyn kinase. Response: Thank you for your suggestion. We have confirmed the knockdown efficiency of LYN by Western blot and updated the data in Fig. 5f and 5g.
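As a side note on the BODIPY quantification discussed above, per-region mean fluorescence intensity of this kind is commonly computed along the lines sketched below. This is a generic, hypothetical sketch, not the authors' actual pipeline; it assumes a single-channel image and uses a simple Otsu threshold to delineate stained regions.

```python
import numpy as np
from skimage import io, filters, measure

def mean_intensities(path):
    """Mean fluorescence intensity of each segmented region in a
    single-channel image: Otsu-threshold the image, label connected
    regions, then average the raw intensities over each region."""
    img = io.imread(path).astype(float)
    mask = img > filters.threshold_otsu(img)
    labels = measure.label(mask)
    regions = measure.regionprops(labels, intensity_image=img)
    return np.array([r.mean_intensity for r in regions])
```

Keeping the laser intensity fixed and avoiding saturated pixels, as the quoted methods text states, is what makes such mean intensities comparable across images.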
2020-09-23T13:06:08.049Z
2020-09-21T00:00:00.000
{ "year": 2020, "sha1": "8cccfa70bd48cbbeedb2005d69a22b6163144331", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41467-020-18565-8.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cd71498ae102d13492fb618b7d82b46443091e88", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
118798069
pes2o/s2orc
v3-fos-license
Theory of supercurrent transport in SIsFS Josephson junctions We present the results of a theoretical study of current-phase relations (CPR) in Josephson junctions of SIsFS type, where 'S' is a bulk superconductor and 'IsF' is a complex weak link consisting of a superconducting film 's', a metallic ferromagnet 'F' and an insulating barrier 'I'. We calculate the relationship between Josephson current and phase difference. At temperatures close to critical, calculations are performed analytically in the frame of the Ginzburg-Landau equations. At low temperatures a numerical method is developed to solve the Usadel equations in the structure self-consistently. We demonstrate that SIsFS junctions have several distinct regimes of supercurrent transport, and we examine spatial distributions of the pair potential across the structure in the different regimes. We study the crossover between these regimes, which is caused by shifting the location of the weak link from the tunnel barrier 'I' to the F layer. We show that strong deviations of the CPR from sinusoidal shape occur even in the vicinity of $T_C$, and these deviations are strongest in the crossover regime. We demonstrate the existence of a temperature-induced crossover between 0 and π states in the contact and show that the smoothness of this transition strongly depends on the CPR shape. I. INTRODUCTION Josephson structures with a ferromagnetic layer have become a very active field of research because of the interplay between superconducting and magnetic order in a ferromagnet, leading to a variety of new effects, including the realization of a π-state with phase difference π in the ground state of a junction, as well as long-range Josephson coupling due to generation of an odd-frequency triplet order parameter [1][2][3]. Further interest in Josephson junctions with a magnetic barrier is due to emerging possibilities of their practical use as elements of superconducting memory 4-12, on-chip π-phase shifters for self-biasing various electronic quantum and classical circuits 13-16, as well as ϕ-batteries, structures having in the ground state a phase difference $\varphi_g = \varphi$, $0 < |\varphi| < \pi$, between superconducting electrodes [17][18][19][20][21][22][23][24][25]. In standard experimental implementations, SFS Josephson contacts are sandwich-type structures 26,27. The characteristic voltage $V_C = J_C R_N$ ($J_C$ is the critical current of the junction, $R_N$ is the resistance in the normal state) of these SFS devices is typically quite low, which limits their practical applications. In SIFS structures 28-32 containing an additional tunnel barrier I, the $J_C R_N$ product in a 0-state is increased 9; however, in a π-state $V_C$ is still too small 33,34 due to strong suppression of the superconducting correlations in the ferromagnetic layer. Recently, a new SIsFS type of magnetic Josephson junction was realized experimentally [9][10][11][12]. This structure represents a connection of an SIs tunnel junction and an sFS contact in series. Properties of SIsFS structures are controlled by the thickness $d_s$ of the s layer and by the relation between the critical currents $J_{CSIs}$ and $J_{CsFS}$ of their SIs and sFS parts, respectively. If the thickness $d_s$ of the s layer is much larger than its coherence length $\xi_S$ and $J_{CSIs} \ll J_{CsFS}$, then the characteristic voltage of an SIsFS device is determined by its SIs part and may reach the maximum corresponding to a standard SIS junction. At the same time, the phase difference ϕ in the ground state of an SIsFS junction is controlled by its sFS part.
As a result, both 0- and π-states can be achieved depending on the thickness of the F layer. This opens the possibility to realize controllable π junctions having a large $J_C R_N$ product. At the same time, when placed in an external magnetic field $H_{ext}$, an SIsFS structure behaves as a single junction, since $d_s$ is typically too thin to screen $H_{ext}$. This provides the possibility to switch $J_C$ by an external field. However, a theoretical analysis of SIsFS junctions has not been performed up to now. The purpose of this paper is to develop a microscopic theory providing the dependence of the characteristic voltage on temperature $T$, exchange energy $H$ in the ferromagnet, transport properties of the FS and sF interfaces, and the thicknesses of the s and F layers. Special attention will be given to determining the current-phase relation (CPR) between the supercurrent $J_S$ and the phase difference ϕ across the structure. II. MODEL OF SISFS JOSEPHSON DEVICE We consider the multilayered structure presented in Fig. 1a. It consists of two superconducting electrodes separated by a complex interlayer including a tunnel barrier I, an intermediate superconducting film s and a ferromagnetic film F. We assume that the conditions of the dirty limit are fulfilled for all materials in the structure. In order to simplify the problem, we also assume that all superconducting films are identical and can be described by a single critical temperature $T_C$ and coherence length $\xi_S$. Transport properties of both the sF and FS interfaces are also assumed identical and are characterized by the interface parameters $\gamma_B = R_{BF} A_B / \rho_F \xi_F$ and $\gamma = \rho_S \xi_S / \rho_F \xi_F$ (1). Here $R_{BF}$ and $A_B$ are the resistance and area of the sF and FS interfaces, $\xi_S$ and $\xi_F$ are the decay lengths of the S and F materials, while $\rho_S$ and $\rho_F$ are their resistivities. Under the above conditions the problem of calculation of the critical current in the SIsFS structure reduces to the solution of the set of the Usadel equations 35. For the S layers these equations take the form of Eqs. (2), (3). Here $\Omega = T(2n+1)/T_C$ are the Matsubara frequencies normalized to $\pi T_C$, and $D_{S,F}$ are the diffusion coefficients in the S and F metals, respectively. The pair potential $\Delta_m$ and the Usadel functions $\Phi_m$ and $\Phi_F$ in (2)-(4) are also normalized to $\pi T_C$. To write equations (2)-(4), we have chosen the x axis in the direction perpendicular to the SI, FS and sF interfaces and put the origin at the sF interface. Equations (2)-(4) must be supplemented by the boundary conditions 36. At $x = -d_s$ they can be written in the form (5), where $\gamma_{BI} = R_{BI} A_B / \rho_S \xi_S$, and $R_{BI}$ and $A_B$ are the resistance and area of the SI interface. At $x = 0$ the boundary conditions are given by Eqs. (6), (7). Far from the interfaces the solution should cross over to a uniform current-carrying superconducting state 37, resulting in an order-parameter phase difference across the structure given by Eq. (11). Here $\varphi(\infty)$ is the asymptotic phase difference across the junction, $\Delta_0$ is the modulus of the order parameter far from the boundaries of the structure at a given temperature, $u = 2 m v_s \xi_S$, $m$ is the electron mass and $v_s$ is the superfluid velocity. Note that since the boundary conditions (5)-(6) include the Matsubara frequency Ω, the phases of the $\Phi_S$ functions depend on Ω and are different from the phases of the pair potential $\Delta_S$ at the FS interfaces, $\chi(d_F)$ and $\chi(0)$. Therefore it is the value $\varphi(\infty)$, rather than $\varphi = \chi(d_F) - \chi(0)$, that can be measured experimentally, by using a scheme compensating the part of Eq. (11) linear in x. The boundary problem (2)-(11) can be solved numerically making use of (8), (10).
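The numbered display equations (2)-(11) did not survive extraction here. For orientation only, the dirty-limit Usadel equations in the Φ-parameterization conventionally take the form below; this is a standard sketch consistent with the notation above, not a verbatim reconstruction of the missing equations.

```latex
% Standard Phi-parameterized Usadel equations (sketch, not verbatim).
% Superconducting layers (m = s, S):
\xi_S^2\,\frac{\pi T_C}{\Omega\,G_m}\frac{d}{dx}\!\left(G_m^2\frac{d\Phi_m}{dx}\right)
  - \Phi_m = -\Delta_m,
\qquad
G_m = \frac{\Omega}{\sqrt{\Omega^2 + \Phi_m\Phi_m^*}}.
% Ferromagnetic layer: Matsubara frequency acquires the exchange field,
% \Omega \to \widetilde{\Omega} = \Omega + iH/\pi T_C, and \Delta_F = 0:
\xi_F^2\,\frac{\pi T_C}{\widetilde{\Omega}\,G_F}\frac{d}{dx}\!\left(G_F^2\frac{d\Phi_F}{dx}\right)
  - \Phi_F = 0,
\qquad
G_F = \frac{\widetilde{\Omega}}{\sqrt{\widetilde{\Omega}^2 + \Phi_F\Phi_F^*}}.
```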
The accuracy of the calculations can be monitored by the equality of the currents $J_S$ calculated at the SI and FS interfaces and in the electrodes. In the further analysis carried out below we limit ourselves to the consideration of the most relevant case of a low-transparent tunnel barrier at the SI interface, $\gamma_{BI} \gg 1$ (13). In this approximation, the junction resistance $R_N$ is fully determined by the barrier resistance $R_{BI}$. Furthermore, the current flowing through the electrodes can lead to the suppression of superconductivity only in the vicinity of the sF and FS interfaces. That means that, up to terms of the order of $\gamma_{BI}^{-1}$, we can neglect the effects of suppression of superconductivity in the region $x \le -d_s$ and write the solution in the form (14). Here, without any loss of generality, we put $\chi(-\infty) = \chi(-d_s - 0) = 0$ (see Fig. 1c). Substitution of (14) into the boundary conditions (5) gives (15). Further simplifications are possible in several limiting cases. III. THE HIGH TEMPERATURE LIMIT $T \approx T_C$ In the vicinity of the critical temperature the Usadel equations in the F layer can be linearized. Writing down their solution in analytical form and using the boundary conditions (6), (7) at the sF and FS interfaces, we can reduce the problem to the solution of the Ginzburg-Landau (GL) equations in the s and S layers. We limit our analysis to the most interesting case, when condition (16) is fulfilled and when there is strong suppression of superconductivity in the vicinity of the sF and FS interfaces. The latter takes place if the parameter Γ, defined by the sums in (19), (20), satisfies the conditions (17), (18). Note that in the limit $h = H/\pi T_C \gg 1$ and $d_F \gg \xi_F\sqrt{2/h}$ the sums in (19), (20) can be evaluated analytically, resulting in (21), (22). In general, the phases of the order parameters in the s and S films are functions of the coordinate x. In the considered approximation the terms that take into account the coordinate dependence of the phases are proportional to the small parameters $(\Gamma q)^{-1}$ and $(\Gamma p)^{-1}$ and therefore provide only small corrections to the current. For this reason, in the first approximation we can assume that the phases in the superconducting electrodes are constants independent of x. In the further analysis we denote the phase of the s film by χ and that of the right S electrode by ϕ (see Fig. 1c). The details of the calculations are summarized in Appendix A. These calculations show that the considered SIsFS junction has two modes of operation, depending on the relation between the s-layer thickness $d_s$ and the critical thickness $d_{sc} = (\pi/2)\xi_S(T)$. For $d_s$ larger than $d_{sc}$, the s film keeps its intrinsic superconducting properties (mode (1)), while for $d_s \le d_{sc}$ superconductivity in the s film exists only due to the proximity effect with the bulk S electrodes (mode (2)). A. Mode (1): SIs + sFS junction, $d_s \ge d_{sc}$ We begin our analysis with the regime when the intermediate s layer is intrinsically superconducting. In this case it follows from the solution of the GL equations that the supercurrents flowing across the SIs, sF and FS interfaces ($J(-d_s)$, $J(0)$ and $J(d_F)$, respectively) can be represented in the form (23), (24) (see Appendix A), where $\Delta_0 = \sqrt{8\pi^2 T_C (T_C - T)/7\zeta(3)}$ is the bulk value of the order parameter in the S electrodes, $A_B$ is the cross-sectional area of the structure, and $\zeta(z)$ is the Riemann zeta function. Eqs. (25), (26) define the order parameters at the sF and FS interfaces, respectively (see Fig. 1b), and $\delta_s(-d_s)$ is the solution of the transcendental equation (28). Here $K(z)$ is the complete elliptic integral of the first kind. Substitution of $\delta_s(-d_s) = 0$ into Eq.
(28) leads to the expression for the critical s-layer thickness, $d_{sc} = (\pi/2)\xi_S(T)$, which was used above. For the calculation of the CPR we need to exclude the phase χ of the intermediate s layer from the expressions (23), (24) for the currents. The value of this phase is determined from the condition that the currents flowing across the Is and sF interfaces should be equal to each other. For large thickness of the middle s electrode ($d_s \gg d_{sc}$) the magnitude of the order parameter $\delta_s(-d_s)$ is close to that of a bulk material, $\Delta_0$, and we may put $a = -b$ in Eqs. (25) and (26), resulting in Eqs. (29), (30), together with the equation (31) to determine χ. From (29), (30) and (31) it follows that in this mode the SIsFS structure can be considered as a pair of SIs and sFS junctions connected in series. Therefore, the properties of the structure are almost independent of the thickness $d_s$ and are determined by the junction with the smallest critical current. Indeed, we can conclude from (31) that the phase χ of the s-layer order parameter depends on the ratio of the critical current, $I_{CSIs} \propto \Gamma_{BI}^{-1}$, of its SIs part to that, $I_{CsFS} \propto |\beta|\Gamma^{-1}$, of the sFS junction. The coefficient β in (31) is a function of the F-layer thickness, which becomes close to unity in the limit of small $d_F$ and exhibits damped oscillations with $d_F$ increase (see the analytical expression (21) for β). That means that there is a set of thicknesses, $d_{Fn}$, determined by the equation β = 0, at which $J_S \equiv 0$ and there is a transition from the 0 to the π state in the sFS part of the SIsFS junction. In other words, crossing the value $d_{Fn}$ with an increase of $d_F$ provides a π shift of χ relative to the phase of the S electrode. In Fig. 2 we clarify the classification of operation modes and demonstrate the phase diagram in the ($d_s$, $d_F$) plane, which follows from our analytical results. The areas with s-layer thickness below the critical thickness $d_{sc} = \pi\xi_S(T)/2$ correspond to mode (2), with fully suppressed superconductivity in the s layer. Conversely, the top part of the diagram corresponds to an s layer in the superconducting state (mode (1)). This area is divided into two parts, depending on whether the weak place is located at the tunnel barrier I (mode (1a)) or at the ferromagnetic F layer (mode (1b)). The separating black solid vertical lines in the upper part of Fig. 2 represent the locus of points where the critical currents of the SIs and sFS parts of the SIsFS junction are equal. The dashed lines give the locations of the points of 0 to π transitions, $d_{Fn} = \pi(n - 3/4)\,\xi_F\sqrt{2/h}$, n = 1, 2, 3, ..., at which $J_S = 0$. In the vicinity of these points there are valleys of mode (1b), with width $\Delta d_{Fn} \approx \xi_F\,\Gamma\,\Gamma_{BI}^{-1} h^{-1/2} \exp\{\pi(n - 3/4)\}$, embedded into the areas occupied by mode (1a). For the set of parameters used for the calculation of the phase diagram presented in Fig. 2, there is only one such valley. Mode (1a): switchable 0-π SIS junction In the experimentally realized case 8-11 the condition $\Gamma_{BI}^{-1} \ll |\beta|\Gamma^{-1}$ is fulfilled and the weak place in the SIsFS structure is located at the SIs interface. In this approximation it follows from (31) that χ ≈ 0 in the 0-state ($d_F < d_{F1}$) and χ ≈ π in the π-state. Substitution of these expressions into (30) results in Eqs. (32), (33) for the 0- and π-states, respectively. It is seen that for $d_F < d_{F1}$ the CPR (32) has the sinusoidal shape typical for SIS tunnel junctions, with a small correction taking into account the suppression of superconductivity in the s layer due to proximity with the FS part of the complex sFS electrode. Its negative sign is typical for tunnel Josephson structures with composite NS or FS electrodes 39,40.
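Since Eqs. (29)-(31) themselves were lost in extraction, the phase-balance logic can be restated schematically. Treating the SIs and sFS parts as two sinusoidal junctions in series with critical currents $I_C^{SIs}$ and $I_C^{sFS}$ (an illustrative simplification, not the full GL result), current conservation fixes the intermediate phase χ:

```latex
% Schematic series-junction balance (illustrative, not Eqs. (29)-(31) verbatim):
I_C^{SIs}\sin\chi \;=\; I_C^{sFS}\sin(\varphi-\chi).
% For I_C^{SIs} \ll |I_C^{sFS}|, chi stays close to phi; expanding to first
% order in the small current ratio gives
J_S(\varphi)\;\simeq\; I_C^{SIs}\sin\varphi
  \;-\;\frac{\bigl(I_C^{SIs}\bigr)^2}{2\,I_C^{sFS}}\,\sin 2\varphi .
```

The second harmonic enters with a negative sign, consistent with the remark above about tunnel junctions with composite electrodes, and the correction grows as the two critical currents approach each other, which is exactly the crossover toward mode (1b).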
For d_F > d_F1 the supercurrent changes its sign, thus exhibiting the transition of the SIsFS junction into the π state. It is important to note that in this mode the SIsFS structure may have almost the same value of the critical current in both the 0 and π states. This is a unique property, which cannot be realized in the SFS devices studied before. For this reason we have identified this mode as a "switchable 0 − π SIS junction".

Mode (1b): sFS junction

The opposite limiting case is realized under the condition Γ_BI^(-1) ≫ |β|Γ^(-1). It is fulfilled in the vicinity of the points of 0 to π transitions, d_Fn, as well as for large d_F values and high exchange fields H. In this mode (see Fig. 2) the weak place shifts to the sFS part of the SIsFS device, and the structure transforms into a conventional SFS junction with a composite SIs electrode. To first order in the small parameter βΓ_BI/Γ it follows from (30), (31) that the CPR reduces to that of the sFS part alone. Its shape for χ → 0 coincides with that previously found in SNS and SFS Josephson devices [37]. It transforms to a sinusoidal form for sufficiently large F-layer thickness. For small thickness of the F layer, as well as in the vicinity of the 0 − π transitions, significant deviations from the sinusoidal form may occur.

The transition between mode (1a) and mode (1b) is also demonstrated in Fig. 3, which shows the dependence of the critical current J_C across the SIsFS structure on the F-layer thickness d_F. The inset in Fig. 3 shows the magnitude of the order parameter at the Is interface as a function of d_F. The solid lines in Fig. 3 give the shapes of J_C(d_F) and δ_0(−d_s) calculated from (32), (33). These equations are valid in the limit d_s ≫ d_sc and do not take into account the possible suppression of superconductivity in the vicinity of the tunnel barrier due to the proximity with the FS part of the device. The dashed lines are the result of calculations using the analytical expressions (23)-(28) for an s-layer thickness d_s = 2ξ_S(T), which slightly exceeds the critical one, d_sc = (π/2)ξ_S(T). These analytical dependencies are calculated at T = 0.9 T_C for H = 10πT_C, Γ_BI = 200, Γ = 5, γ_B = 0. The short-dashed curves are the results of numerical calculations performed self-consistently in the frame of the Usadel equations (2)-(11) for the corresponding set of parameters T = 0.9 T_C, H = 10πT_C, γ_BI = 1000, γ = 1, γ_B = 0.3, and the same s-layer thickness, d_s = 2ξ_S(T). The interface parameters γ_BI = 1000, γ = 1 are chosen the same as in the analytical case. The choice of γ_B = 0.3 allows one to take into account the influence of the mismatch which generally occurs at the sF and FS boundaries. It can be seen that there is qualitative agreement between the shapes of the three curves. For small d_F the structure is in the 0-state of the mode (1a) regime. The difference between the dashed and short-dashed lines in this region is due to the fact that the inequalities (18) are not fulfilled for very small d_F. The solid and short-dashed curves start from the same value, since for d_F = 0 the sFS electrode becomes a single, spatially homogeneous superconductor. For d_s = 2ξ_S(T) the intrinsic superconductivity in the s layer is weak and is partially suppressed with increasing d_F (see the inset in Fig. 3). This suppression is accompanied by a rapid drop of the critical current. It can be seen that starting from the value d_F ≈ 0.4ξ_F our analytical formulas (23)-(28) are accurate enough.
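The damped oscillation of β with d_F, and hence the sequence of 0−π transition thicknesses d_Fn, can be illustrated numerically. The closed form used below, β ∝ e^(−u)(cos u − sin u) with u = √(h/2) d_F/ξ_F, is an assumption: it is the standard large-h SFS asymptotic, chosen because its zeros reproduce exactly the quoted d_Fn = π(n − 3/4)√(2/h) ξ_F.

```python
import numpy as np

h, xi_F = 10.0, 1.0   # exchange field h = H/(pi*T_C); F-layer coherence length

def beta(d_F):
    """Assumed large-h asymptotic of the coefficient beta (see lead-in)."""
    u = np.sqrt(h / 2.0) * d_F / xi_F
    return np.exp(-u) * (np.cos(u) - np.sin(u))

# Zeros of beta: the 0-pi transition thicknesses d_Fn = pi*(n-3/4)*sqrt(2/h)*xi_F
d_Fn = [np.pi * (n - 0.75) * xi_F * np.sqrt(2.0 / h) for n in (1, 2, 3)]
print("d_Fn / xi_F :", ", ".join(f"{d:.3f}" for d in d_Fn))
print("beta(d_Fn)  :", ", ".join(f"{beta(d):.1e}" for d in d_Fn))

# The sign of beta labels the state of the sFS part (sign convention assumed):
for d in (0.3, 1.0, 2.0):
    print(f"d_F = {d:.1f} xi_F: beta = {beta(d):+.4f} ->",
          "0-state" if beta(d) > 0 else "pi-state")
```

With H = 10πT_C this places the first transition near d_F1 ≈ 0.35ξ_F, broadly consistent with the 0-state found at d_F = 0.3ξ_F and the π-state at d_F = ξ_F in the figures.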
The larger d_s, the better the agreement between the numerical and analytical results, owing to the better applicability of the GL equations in the s layer. With further increase of d_F the structure passes through the valley of the mode (1b) state, located in the vicinity of the 0 to π transition, and enters the π state of mode (1a).

B. Mode (2): SInFS junction, d_s ≤ d_sc

For d_s ≤ d_sc the intrinsic superconductivity in the s layer is completely suppressed, resulting in the formation of a composite -InF- weak-link area, where 'n' marks the intermediate s film in the normal state. In this parameter range the weak place is always located at the tunnel barrier, and the CPR has the sinusoidal shape (34). In the vicinity of the critical thickness, d_s ≲ d_sc, the factor cos(d_s/ξ_S(T)) in (34) is small and the supercurrent is given by the expression (35). A further decrease of d_s into the limit d_s ≪ d_sc leads to (36). The magnitude of the critical current in (36) is close to that in the well-known case of SIFS junctions in the appropriate regime.

C. Current-Phase Relation

In the previous section we demonstrated that variation of the ferromagnetic-layer thickness should lead to a transformation of the CPR of the SIsFS structure. Figure 4a illustrates the J_C(d_F) dependencies calculated from expressions (23)-(28) at T = 0.9T_C for H = 10πT_C, γ_B = 0, Γ_BI ≈ 200, and Γ ≈ 5 for two thicknesses of the s layer: d_s = 5ξ_S(T) (solid line) and d_s = 0.5ξ_S(T) (dashed line). In Figs. 4b-d we enlarge the parts of the J_C(d_F) dependence enclosed in the rectangles labeled by the letters b, c, and d in Fig. 4a and mark by digits the points at which the J_S(ϕ) curves have been calculated. These curves are marked by the same digits as the points in the enlarged parts of the J_C(d_F) dependencies. The dashed lines in Figs. 4b-d are the loci of the critical points at which the J_S(ϕ) dependence reaches its maximum value, J_C(d_F).

Figure 4b presents the mode (1b) valley, which divides the mode (1a) domain into 0- and π-state regions. In the mode (1a) domain the SIsFS structure behaves as SIs and sFS junctions connected in series. Its critical current equals the smaller of the critical currents of the SIs (J_CSIs) and sFS (J_CsFS) parts of the device. In the considered case the thickness of the s film is sufficiently large to prevent suppression of superconductivity. Therefore J_CSIs does not change when moving from point 1 to point 2 along the J_C(d_F) dependence. At point 2, where J_CSIs = J_CsFS, we arrive at the border between mode (1a) and mode (1b). It is seen that at this point the deviation of J_S(ϕ) from the sinusoidal shape is maximal. A further increase of d_F leads to the 0-π transition, where the parameter β in (33) becomes small and J_S(ϕ) practically recovers its sinusoidal shape. Beyond the region of the 0 to π transition the critical current changes its sign and the CPR starts to deform again. The deformation reaches its maximum at point 7, located at the other border between modes (1a) and (1b). The displacement from point 7 to point 8 along the J_C(d_F) dependence leads to recovery of the sinusoidal CPR.

Figure 4c presents the transition from the π state of mode (1a) to mode (1b) with increasing d_F. It is seen that the offset from point 1 to points 2-5 along J_C(d_F) results in a transformation of the CPR similar to that shown in Fig. 4b during the displacement from point 1 to points 2-6. The only difference is the initial negative sign of the critical current.
However, this behavior of the CPR, together with the close transition between the modes, leads to the formation of a well-pronounced kink in the J_C(d_F) dependence. Furthermore, contrary to Fig. 4b, at point 6 the junction is still in mode (1b) and remains in this mode with further increase of d_F. At point 6 the critical current reaches its maximum value, and for larger d_F it decreases along the dashed line. Figure 4d shows the transformation of the CPR in the vicinity of the next 0 to π transition in mode (1b). There is a small deviation from the sinusoidal shape at point 1, which vanishes exponentially with increasing d_F. In mode (2) (the dashed curve in Fig. 4a) the intrinsic superconductivity in the s layer is completely suppressed, resulting in the formation of a composite -InF- weak-link region, and the CPR becomes sinusoidal (34).

IV. ARBITRARY TEMPERATURE

At arbitrary temperatures the boundary problem (2)-(11) goes beyond the assumptions of the GL formalism and requires a self-consistent solution. We have performed it numerically, in terms of the nonlinear Usadel equations, in an iterative manner. All calculations were performed for T = 0.5T_C, ξ_S = ξ_F, γ_BI = 1000, γ_BFS = 0.3, and γ = 1. The calculations show that at the selected transparency of the tunnel barrier (γ_BI = 1000) the suppression of superconductivity in the left electrode is negligibly small. This allows one to select the thickness of the left S electrode d_SL = 2ξ_S without any loss of generality. On the contrary, the proximity of the right S electrode to the F layer results in strong suppression of superconductivity at the FS interface. Therefore the pair potential of the right S electrode reaches its bulk value only at a thickness d_SR ≳ 10ξ_S. It is for this reason that we have chosen d_SR = 10ξ_S for the calculations. Furthermore, the presence of a low-transparent tunnel barrier in the considered SIsFS structures limits the magnitude of the critical current J_C to a value much smaller than the depairing current of the superconducting electrodes. This allows one to neglect nonlinear corrections to the coordinate dependence of the phase in the S banks.

The results of the calculations are summarized in Fig. 5. Figure 5a shows the dependence of J_C of the SIsFS structure on the F-layer thickness d_F for a relatively large, d_s = 5ξ_S (solid), and a small, d_s = 0.5ξ_S (dashed), s-film thickness. The letters on the curves indicate the points at which the coordinate dependencies of the magnitude of the order parameter, |Δ(x)|, and of the phase difference across the structure, χ, have been calculated for the phase difference ϕ = π/2. These curves are shown in panels b)-f) of Fig. 5 as the upper and bottom plots, respectively. There is a direct correspondence between the letters b, c, d, e, f on the J_C(d_F) curves and the labels b), c), d), e), f) of the panels. It is seen that the qualitative behavior of the J_C(d_F) dependence at T = 0.5T_C remains similar to that obtained in the frame of the GL equations for T = 0.9T_C (see Fig. 4a). Furthermore, the modes of operation discussed above remain relevant as well. Panels b)-f) in Fig. 5 make this statement clearer. At the point marked by the letter 'b' the s film is sufficiently thick, d_s = 5ξ_S, while the F film is rather thin, d_F = 0.3ξ_F, and therefore the structure is in the 0-state of mode (1a). In this regime the phase drops mainly across the tunnel barrier, while the phase shifts in the s film and in the S electrodes are negligibly small (see the bottom plot in Fig. 5b).
At the point marked by the letter 'c' (d_s = 5ξ_S, d_F = ξ_F) the structure is in the π-state of mode (1a). It is seen from Fig. 5c that there is a phase jump at the tunnel barrier and an additional π shift between the phases of the S and s layers. For d_F = 3ξ_F (Fig. 5d) the position of the weak place shifts from the SIs to the sFS part of the SIsFS junction, and the structure starts to operate in mode (1b). It is seen that the phase drop across the SIs part is small, while ϕ − χ ≈ π/2 across the F layer, as it should be in SFS junctions with SIs and S electrodes.

At the points marked by the letters 'e' and 'f' the thickness of the s layer, d_s = 0.5ξ_S, is less than its critical value. Superconductivity in the s spacer is then suppressed due to the proximity with the F film, and the SIsFS device operates in mode (2). At d_F = ξ_F (the dot 'e' in Fig. 5a and panel Fig. 5e) the weak place is located at the SIs part of the structure and there is an additional π shift of the phase across the F film. As a result, the SIsFS structure behaves like an SInFS tunnel π-junction. The unsuppressed residual value of the pair potential is due to the proximity with the right S electrode, and it disappears with the growth of the F-layer thickness, which weakens this proximity effect. At d_F = 3ξ_F (Fig. 5f) the weak place is located at the F part of the IsF trilayer. Despite the strong suppression of the pair potential in the s layer, the distribution of the phase inside the IsF weak place has a rather complex structure, which depends on the thicknesses of the s and F layers.

A. Temperature crossover from 0 to π states

The temperature-induced crossover from 0 to π states in SFS junctions was discovered in [26] in structures with sinusoidal CPR. It was found that the transition takes place in a relatively broad temperature range. Our analysis of the SIsFS structure (see Fig. 6a) shows that the smoothness of the 0 to π transition strongly depends on the CPR shape. This phenomenon was not analyzed before, since almost all previous theoretical results were obtained within a linear approximation, leading to a sinusoidal CPR. To prove this statement, we have calculated numerically a set of J_C(T) curves for a number of F-layer thicknesses d_F. We have chosen the thickness of the intermediate superconductor d_s = 5ξ_S in order to keep the SIsFS device in mode (1a), and we have examined the parameter range 0.3ξ_F ≤ d_F ≤ ξ_F, in which the structure exhibits the first 0 to π transition. The borders of the d_F range are chosen in such a way that the SIsFS contact is either in the 0-state (d_F = 0.3ξ_F) or in the π-state (d_F = ξ_F) in the whole temperature range. The corresponding J_C(T) dependencies (dashed lines in Fig. 6a) provide the envelope of the set of J_C(T) curves calculated for the considered range of d_F. It is clearly seen that in the vicinity of T_C a decrease of d_F results in the creation of a temperature range where the 0-state exists. The point of the 0 to π transition shifts to lower temperatures with decreasing d_F. For d_F ≳ 0.5ξ_F the transition is rather smooth, since for T ≥ 0.8T_C the junction remains in mode (2) (with suppressed superconductivity) and the deviations of the CPR from sin(ϕ) are small. Thus the behavior of the J_C(T) dependencies in this case can be easily described by the analytic results of Sec. III C. The situation changes drastically at d_F = 0.46ξ_F (short-dashed line in Fig. 6a). For this thickness the point of the 0 to π transition shifts down to T ≈ 0.25T_C.
This shift is accompanied by an increase of the amplitudes of the higher harmonics of the CPR (see Fig. 6b). As a result, the shape of the CPR is strongly modified, so that in the interval 0 ≤ ϕ ≤ π the CPR curves are characterized by two values, J_C1 and J_C2, as is known from the case of SFcFS constrictions [41]. In general, J_C1 and J_C2 differ both in sign and in magnitude, and J_C = max(|J_C1|, |J_C2|). For T > 0.25T_C the junction is in the 0-state, and J_C grows with decreasing T down to T ≈ 0.5T_C. A further decrease of T is accompanied by suppression of the critical current. In the vicinity of T ≈ 0.25T_C the difference between |J_C1| and |J_C2| becomes negligible, and the system starts to develop an instability that eventually shows up as a sharp jump from the 0 to the π state. After the jump, |J_C| increases continuously as T goes to zero. It is important to note that this behavior should always be observed in the vicinity of the 0 − π transition, i.e., in the range of parameters in which the amplitude of the first harmonic is small compared to the higher harmonics. However, the closer the temperature is to T_C, the less pronounced are the higher CPR harmonics and the smaller is the magnitude of the jump. This fact is illustrated by the dash-dotted line showing J_C(T) calculated for d_F = 0.48ξ_F. The jump in the curves calculated for d_F ≥ 0.5ξ_F also exists, but it is small and cannot be resolved on the scale used in Fig. 6a. At d_F = 0.45ξ_F (dash-dot-dotted line in Fig. 6) the junction is always in the 0-state, and there is only a small suppression of the critical current at low temperatures, despite the realization of a nonsinusoidal CPR. Thus the calculations clearly show that it is possible to realize a set of parameters of SIsFS junctions for which a thermally-induced 0-π crossover can be observed and controlled by temperature variation.

B. 0 to π crossover by changing the effective exchange energy in an external magnetic field

The exchange field is an intrinsic microscopic parameter of a ferromagnetic material, which cannot be controlled directly by application of an external field. However, the spin splitting in F layers can be provided both by the internal exchange field and by an external magnetic field [42,43], resulting in the generation of an effective exchange field equal to their sum. The practical realization of this effect is a challenge, however, since it is difficult to fulfill the special requirements [42,43] on the thickness of the S electrodes and on the SFS junction geometry. Another opportunity can be realized in soft diluted ferromagnetic alloys like Fe_0.01 Pd_0.99. Investigations of the magnetic properties [44] of these materials have shown that below 14 K they exhibit ferromagnetic order due to the formation of weakly coupled ferromagnetic nanoclusters. In the clusters, the effective spin polarization of the Fe ions is about 4µ_B, corresponding to that in the bulk Pd_3Fe alloy. It was demonstrated that the hysteresis loops of Fe_0.01 Pd_0.99 films have the form typical of nanostructured ferromagnets with weakly coupled grains (the absence of domains; a small coercive force; a small interval of magnetization reversal, in which the magnetization changes its direction following the changes in the applied magnetic field; and a prolonged part, in which the component of the magnetization vector along the applied field grows gradually). The smallness of the concentration of the Pd_3Fe clusters and their ability to follow variations in the applied magnetic field may result in the generation of an effective field H_eff set by the spin-polarized fractions of the conduction electrons.
Here n is the concentration of electrons within a physically small volume V, in which one performs the averaging of the Green's functions in the transformation to a quasiclassical description of superconductivity; n_↑,↓ and V_↑,↓ are the quantities describing the spin-polarized parts of n and the parts of the volume V which they occupy, respectively. A similar kind of H_eff arises in NF or SF proximity structures composed of thin layers [45-48]. There is an interval of applied magnetic fields H_ext in which the alloy magnetization changes its direction and the concentrations n_↑,↓ depend on the history of application of the field [10,12], providing the possibility to control H_eff by an external magnetic field. The derivation of possible relationships between H_eff and H_ext is outside the scope of this paper. Below we concentrate only on an assessment of the intervals over which H_eff should be changed to ensure the transition of an SIsFS device from the 0 to the π state. To do this, we calculate the J_C(H) dependencies presented in Fig. 7. The calculations have been done for a set of structures with d_F = 2ξ_F and s-film thicknesses ranging from a thick film, d_s = 5ξ_S (solid line), through an intermediate value, d_s = 2ξ_S (dashed-dotted line), to a thin film with d_s = 0.5ξ_S (dashed line). It is clearly seen that these curves have the same shape as the J_C(d_F) dependencies presented in Sec. III. For d_s = 5ξ_S and H ≲ 7πT_C the magnitude of J_C is practically independent of H, but it changes sign at H ≈ 1.25πT_C due to the 0-π transition. It is seen that for this transition, while maintaining the normalized current at a level close to unity, changes of H of the order of 0.1πT_C, or about 10%, are required. For d_s = 2ξ_S and H ≲ 3πT_C it is necessary to change H by 20% to realize such a transition; in this case the normalized current is at the level of 0.4. In mode (2) the transition requires a 100% change of H, which is not practical.

V. DISCUSSION

We have performed a theoretical study of magnetic SIsFS Josephson junctions. At T ≈ T_C the calculations have been performed analytically in the frame of the GL equations. For arbitrary temperatures we have developed a numerical code for the self-consistent solution of the Usadel equations. We have outlined several modes of operation of these junctions. For the s layer in the superconducting state they are S-I-sFS or SIs-F-S devices, with the weak place located at the insulator (mode (1a)) or at the F layer (mode (1b)), respectively. For small s-layer thickness the intrinsic superconductivity in it is completely suppressed, resulting in the formation of an InF weak place (mode (2)). We have examined the shape of J_S(ϕ) and the spatial distributions of the modulus of the pair potential and of the phase difference across the SIsFS structure in these modes. In mode (1) the shape of the CPR can differ substantially from the sinusoidal one even in the vicinity of T_C. The deviations are largest when the structure is close to the crossover between modes (1a) and (1b). This effect results in kinks in the dependencies of J_C on temperature and on the parameters of the structure (the layer thicknesses d_F and d_s and the exchange energy H), as illustrated by the J_C(d_F) curves in Fig. 4. The transformation of the CPR is even more important at low temperatures. For T ≲ 0.25T_C a sharp 0 − π transition, induced by a small temperature variation, can be realized (Fig. 6). This instability must be taken into account when using the structures as memory elements.
On the other hand, this effect can be used in detectors of electromagnetic radiation, where absorption of a photon in the F layer provides local heating, leading to the development of the instability and the subsequent registration of the photon.

We have shown that the suppression of the order parameter in the thin s film due to the proximity effect leads to a decrease of the J_C R_N product in both the 0- and π-states. On the other hand, the proximity effect may also support s-layer superconductivity through the impact of the S electrodes. In mode (1a) the J_C R_N product in the 0- and π-states can reach values typical for SIS tunnel junctions. In mode (2) a sinusoidal CPR is realized. Despite that, the distribution of the phase difference χ(x) in the IsF weak place may have a complex structure, which depends on the thicknesses of the s and F layers. These effects should influence the dynamics of a junction in its ac state and deserve further study.

Further, we have also shown that in mode (1a) a nearly 10% change in the exchange energy can cause a 0 − π transition, i.e., a change of the sign of the J_C R_N product while maintaining its absolute value. This unique feature can be implemented in mode (1a), since it is only in this mode that changes of the exchange energy merely determine the presence or absence of a π shift between the s and S electrodes and do not affect the magnitude of the critical current of the SIs part of the SIsFS junction. In mode (1b) the F layer becomes part of the weak-link area. In this case the π shift initiated by the change in H must be accompanied by changes of the magnitude of J_C, due to the oscillatory nature of the superconducting correlations in the F film. The latter may lead to very complex and irregular J_C(H_ext) dependencies, which have been observed in Nb-PdFe-Nb SFS junctions (see Fig. 3 in [8]). Contrary to that, the J_C(H_ext) curves of SIsFS structures with the same PdFe metal do not demonstrate these irregularities [10,11].

To characterize the stability of a junction with respect to variations of H, it is convenient to introduce the parameter η = (dJ_C/J_C)/(dH/H), which relates the relative change in the critical current to the relative change in the exchange energy. The larger the magnitude of η, the more intense the irregularities expected in an SFS junction under variation of H. In Fig. 8 we compare the SIsFS devices with conventional SFS, SIFS, and SIFIS junctions using the two most important parameters: the instability parameter η and the J_C R_N product, the value which characterizes the high-frequency properties of the structures. It can be seen that the presence of two tunnel barriers in the SIFIS junction results in the smallest J_C R_N and a strong instability. The SIFS and SIsFS structures in mode (2) demonstrate better results, with almost the same parameters. Conventional SFS structures have a J_C R_N product that is two times smaller, having a higher critical current but a lower resistance. At the same time, SFS junctions are more stable due to the absence of a low-transparent tunnel barrier; the latter is the main source of instability, due to the sharp phase discontinuities at the barrier 'I'. Contrary to the standard SFS, SIFS, and SIFIS junctions, SIsFS structures achieve J_C R_N and stability characteristics comparable to those of SIS tunnel junctions. This unique property is favorable for the application of SIsFS structures in superconducting electronic circuits.
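The harmonic-competition mechanism behind the sharp low-temperature 0−π jump admits a compact toy illustration. The sketch below is an assumption-level model, not the self-consistent Usadel calculation: the CPR is truncated to two harmonics, J(ϕ) = J_1 sin ϕ + J_2 sin 2ϕ, the first harmonic is swept through zero (as temperature does near the transition point), and the ground state is tracked as the global minimum of the Josephson energy; the positive sign of J_2 is chosen so that both the 0 and π states are locally stable near the transition.

```python
import numpy as np

phi = np.linspace(0.0, 2.0 * np.pi, 4001)

def analyze(j1, j2):
    """Two-harmonic CPR, J(phi) = j1*sin(phi) + j2*sin(2*phi)."""
    current = j1 * np.sin(phi) + j2 * np.sin(2.0 * phi)
    energy = -j1 * np.cos(phi) - 0.5 * j2 * np.cos(2.0 * phi)  # Josephson energy
    return phi[int(np.argmin(energy))], np.max(np.abs(current))

j2 = 0.5                  # sizable second harmonic near the 0-pi transition
for j1 in (0.4, 0.2, 0.05, -0.05, -0.2, -0.4):  # first harmonic sweeps through
    phi_gs, jc = analyze(j1, j2)                # zero, mimicking T variation
    state = "0 " if np.cos(phi_gs) > 0.0 else "pi"
    print(f"J1 = {j1:+.2f}: ground state = {state}-state "
          f"(phi_gs = {phi_gs:.2f}), Jc = {jc:.3f}")
```

As J_1 passes through zero with a sizable J_2, the global ground state switches abruptly between ϕ = 0 and ϕ = π while the critical current stays finite (of order |J_2|) through the jump, in line with the behavior described above for d_F = 0.46ξ_F.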
Endolymphatic exclusion for the treatment of pediatric chylous ascites secondary to neuroblastoma resection: report of two cases

Chylous ascites is a rare but highly morbid complication of oncologic resection, often associated with retroperitoneal lymphadenectomy. Conservative measures with total parenteral nutrition or lipid-reduced formulas constitute the initial mainstay of therapy, but not without risks and failures. This report describes 2 endolymphatic treatment strategies for iatrogenic chylous ascites following neuroblastoma resection. Lymphatic leaks were identified using intranodal lymphangiography, targeted with cone-beam computed tomographic guidance, and embolized with n-butyl cyanoacrylate. There were no adverse outcomes, with complete resolution of chylous ascites and a mean follow-up of 26 months.

Despite its overall low incidence, chylous ascites results in significant morbidity. The loss of chyle, which is rich in proteins, lipids, immunoglobulins, electrolytes, and vitamins, depletes many elements that are vital to normal physiology [4]. Nutritional and electrolyte deficiencies require total parenteral nutrition and fluid replenishment. Loss of lymphocytes and immunoglobulins contributes to immunosuppression [5]. There are also reports of diminished bioavailability of certain drugs with chyle losses [6-8]. These complications may extend hospital stays, increase postoperative mortality, and delay or preclude adjuvant chemotherapy [9]. Conservative treatment measures, including enteral feeding, total parenteral nutrition, and somatostatin analogs, constitute the initial medical management. Reported success rates of these conservative methods range from 60% to 100% over 2-6 weeks of treatment [9,10]. Major drawbacks to these approaches include complications of long-term central venous access, malnutrition, neurological developmental deficits from a fatty acid deficient diet, and extended hospitalizations [9,11-13]. Various endolymphatic interventions offer a less invasive approach for definitive management of refractory chylous ascites [14-16]. These interventions have, however, been reported largely in adults. This report describes 2 pediatric patients with persistent iatrogenic chylous ascites who were successfully managed with endolymphatic embolization.

Patient 1

A 5-year-old male patient presented with 1 month of periorbital ecchymosis. Diagnostic evaluation revealed a 7.6 × 7.1 × 9.5 cm right adrenal mass, pancytopenia, sphenoid and occipital bone lesions, and bone marrow infiltration consistent with high-risk neuroblastoma. After his fourth cycle of chemotherapy per ANBL0532, he underwent right adrenalectomy, retroperitoneal lymphadenectomy, and nonsegmental liver resection. Large-volume chylous ascites developed 2 weeks after surgery upon resolution of ileus and initiation of total parenteral nutrition. Paracentesis was performed 1 month post resection with removal of 2 L of fluid, notable for a triglyceride level of 310 mg/dL (normal reference range < 110 mg/dL). The ascites rapidly reaccumulated despite initiation and escalation of octreotide infusion (up to 8 mcg/kg/hr), medium-chain triglyceride formula and, later, nil per os status. A second paracentesis was performed 2 weeks later with removal of 3.6 L of fluid. The patient was referred for lymphatic imaging and intervention. Under general anesthesia, a paracentesis was performed with removal of 3 L of fluid.
Conventional bilateral inguinal node lymphangiography was performed with ethiodized oil, revealing bilateral foci of retroperitoneal extravasation at L3-L4 (Fig. 1). The patient was repositioned prone. Foci of extravasation were targeted percutaneously with 22-gauge needles using cone-beam computed tomography (CBCT) with navigational overlay (XperGuide, Philips). Each site was embolized with 1.5 mL of a 1:1 mixture of n-butyl cyanoacrylate (n-BCA) to ethiodized oil until retrograde filling of the supplying retroperitoneal lymphatic channels was observed. The patient underwent a subsequent paracentesis 3 days later with removal of 3 L of fluid. He was weaned from octreotide and transitioned from parenteral nutrition to an unrestricted diet over the subsequent 6 weeks. A final paracentesis, 1 month after the lymphatic embolization, was performed with removal of 1.9 L of fluid. Ascites has not recurred since. With subsequent therapies including tandem autologous stem cell transplantation, radiation, and immunotherapy, the patient achieved complete disease remission 15 months post-presentation and remains disease free at 45 months post-presentation. Patient follow-up from time of intervention is currently 40 months.

Patient 2

A 1-year-old boy presented with pathologic wrist and shoulder fractures and was found to have a 12.2 × 8.5 × 8.5 cm left adrenal mass and diffuse osseous involvement. Operative biopsy confirmed favorable histology, N-Myc-amplified neuroblastoma. After his fifth cycle of chemotherapy per ANBL1531 Arm A, he underwent resection of the left adrenal mass with extensive lymphadenectomy around the aorta, superior mesenteric artery, celiac axis, and left renal artery and vein at 17 months of age. Large-volume ascites accumulated 3 weeks postoperatively with the advancement of diet. Paracentesis yielded 0.6 L of grossly chylous fluid (triglycerides 6785 mg/dL). Ascites recurred despite the patient being made nil per os, and he underwent a second paracentesis with removal of 1 L of fluid. Five weeks post-resection, he was referred for lymphatic imaging and intervention. Under general anesthesia, a paracentesis was performed with removal of 1 L of fluid. Conventional bilateral inguinal node lymphangiography was performed with ethiodized oil, revealing a unilateral focus of retroperitoneal extravasation at L2 (Fig. 2). CBCT of the pelvis was performed, characterizing a left lateral external iliac chain lymph node with efferent drainage to the site of extravasation. The node was targeted percutaneously with a 25-gauge needle using CBCT with navigational overlay. After efferent drainage to the site of extravasation was again confirmed, embolization was performed into and across the extravasation using 0.5 mL of a 1:3 mixture of n-BCA to ethiodized oil. He remained on total parenteral nutrition for 1 additional week and subsequently was advanced to an unrestricted diet over 1 week without recurrence of ascites. The patient went on to receive hematopoietic stem cell transplant. At the time of this report, he has undergone his fifth cycle of immunotherapy. Patient follow-up from time of intervention is currently 12 months.

Discussion

This description of 2 successful endolymphatic interventions for chylous ascites following neuroblastoma resection highlights a minimally invasive treatment option for this morbid condition.

Fig. 1 - Frontal fluoroscopic image following bilateral inguinal access and lymphangiography using ethiodized oil (A) demonstrated two foci of lymphatic extravasation in the retroperitoneum (arrows). Following prone positioning, foci of extravasation were targeted using cone beam CT guidance (B). Lymphatic fluid draining from the access needles was noted (C). Each site was embolized using cyanoacrylate (D) (arrowheads).
Each case demonstrated complete resolution of ascites for a mean follow-up of 26 months. No procedure-related complications were seen. In both cases, a lymphatic leak was identified prior to targeted embolization. In larger case series, lymphatic leaks were identified in 55%-75% of patients [16,17]. Comparatively, prior reports on surgical intervention demonstrated identification in 80% of patients [18,19]. The use of ethiodized oil during lymphangiography has been shown to have a therapeutic effect, likely due to an inflammatory or embolic effect [17,20]. With this in mind, overall clinical success rates of these less invasive modalities can approach 90% [16,17]. Multiple previous reports demonstrate the feasibility and safety of embolizing lymph nodes or lymphatic networks with embolics such as n-BCA glue [14,21-25]. The n-BCA is mixed with ethiodized oil at various ratios to control the rate of polymerization and downstream propagation. Excessive downstream embolization must be avoided to prevent obstruction of normal central conducting channels such as the cisterna chyli and thoracic duct. Targeting lymph nodes or channels in close proximity to the injury minimizes excessive embolization of upstream structures and the theoretical risk of lymphedema. Additionally, administration of ethiodized oil should be limited to 0.25 mL/kg in children to avoid possible adverse outcomes such as pulmonary oil embolism [26,27].

Fig. 2 - Frontal fluoroscopic image following bilateral inguinal access and lymphangiography using ethiodized oil (A) demonstrated a single focus of lymphatic extravasation in the left retroperitoneum (arrow). Cone beam CT was performed (B), confirming the focus of extravasation (arrow) and further characterizing retroperitoneal lymphatic anatomy. A left lateral external iliac chain node was identified and targeted using cone beam CT guidance with navigational overlay (XperGuide, Philips) (C). Following contrast confirmation of inline drainage to the focus of extravasation, embolization was performed using cyanoacrylate (D) with preservation of the right-sided lymphatics.

Compared to thoracic duct embolization, embolization of retroperitoneal and mesenteric lymphatic injuries creates several technical challenges. Relative to the typical access channel in these interventions, the cisterna chyli, the involved ducts may be of very small caliber, particularly in a pediatric patient. Accessing the cisterna chyli with sufficiently caudal angulation for retrograde wire advancement would introduce the morbidity associated with transthoracic approaches. A technique of inferior thoracic duct embolization followed by retrograde reflux of sclerosant has been reported [28]. Percutaneous hepatic lymphatic access, transcervical thoracic duct arch access, and endovascular entry through the venolymphatic junction may afford the retrograde access trajectory needed for infradiaphragmatic lymphatic interventions [29]. In this report, we described both direct access to the injury itself with refluxing embolization and upstream node access with downstream embolization under CBCT guidance to address the chylous ascites while minimizing nontarget lymphatic embolization, including preservation of the thoracic duct.
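The weight-based ethiodized oil ceiling quoted above lends itself to a quick bedside check. The sketch below is illustrative only: the patient weight is a hypothetical placeholder (weights are not reported in the cases), and the mixture arithmetic simply apportions the oil fraction of an n-BCA:ethiodized-oil mixture.

```python
def ethiodized_oil_budget(weight_kg, limit_ml_per_kg=0.25):
    """Maximum ethiodized oil volume for a pediatric patient (see text)."""
    return limit_ml_per_kg * weight_kg

def oil_in_mixture(total_ml, nbca_parts, oil_parts):
    """Ethiodized oil contained in an n-BCA:oil mixture of a given ratio."""
    return total_ml * oil_parts / (nbca_parts + oil_parts)

# Hypothetical 10 kg toddler (weight not reported in the case reports):
budget = ethiodized_oil_budget(10.0)   # -> 2.5 mL ceiling
used = oil_in_mixture(0.5, 1, 3)       # 0.5 mL of a 1:3 n-BCA:oil mixture
print(f"ceiling {budget:.2f} mL; embolic mixture contains {used:.2f} mL oil")
# Note: oil given during diagnostic lymphangiography would count against
# the same ceiling before any embolic is delivered.
```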
These cases exemplify the feasibility and efficacy of endolymphatic interventions in iatrogenic pediatric chylous ascites. More studies are warranted to establish standardized techniques and long-term safety.
R-symmetric gauge mediation

We present a version of Gauge Mediated Supersymmetry Breaking which preserves an R-symmetry: the gauginos are Dirac particles, the A-terms are zero, and there are four Higgs doublets. This offers an alternative way for gauginos to acquire mass in the supersymmetry-breaking models of Intriligator, Seiberg, and Shih. We investigate the possibility of using R-symmetric gauge mediation to realize the spectrum and large sfermion mixing of the model of Kribs, Poppitz, and Weiner.

Motivation

Supersymmetry is one of the most studied ideas for physics at the LHC. Supersymmetric phenomenology is usually described by the minimal supersymmetric standard model (MSSM) and its variations (xMSSM's), obtained either by adding extra states, usually gauge singlets, or by focusing on certain regions of parameter space. It was only recently realized [1] that a new universality class of supersymmetric particle physics models, characterized by an extra R-symmetry (which can be continuous or discrete (⊇ Z_4), exact or approximate), is not only phenomenologically viable, but also helps to significantly alleviate the supersymmetric flavor problem and has novel signatures at the TeV scale. A model with an exact R-symmetry, called the "Minimal R-symmetric Supersymmetric Standard Model" (MRSSM), was constructed in [1]. It was shown, somewhat unexpectedly, that with the imposition of the new symmetry significant flavor violation in the sfermion sector is allowed by the current data, even for squarks and sleptons with masses of a few hundred GeV, provided the Dirac gauginos are sufficiently heavy, while the flavor-singlet supersymmetric CP problem is essentially absent. Stronger bounds on the allowed flavor violation, obtained by including the leading-log QCD corrections, were subsequently given in [2]. The Dirac nature of gauginos and higgsinos and the possibility of large sfermion flavor violation in the MRSSM both present a departure from usual supersymmetric phenomenology.

The analysis of the MRSSM in [1] was performed in the framework of an effective supersymmetric theory with the most general soft terms respecting the R-symmetry. The place of this model in a grander framework, including the breaking and mediation of supersymmetry, was not addressed in detail. The purpose of this paper is to investigate a possible ultraviolet completion of the MRSSM in the framework of gauge-mediated supersymmetry breaking, with the hope that an ultraviolet completion will help narrow the choice of parameters of the effective field theory analysis.

The focus of this paper on gauge mediation is motivated by several recent observations. First of all, phenomenological studies [3] of the MRSSM have shown that Dirac charginos are typically the next-to-lightest supersymmetric particles (NLSPs) in the visible sector. This points toward a possible small scale of supersymmetry breaking, with the resulting light gravitino allowing a decay channel for the light charginos. Secondly, it has been known [4] for a while that models with non-generic superpotentials can have both broken supersymmetry and unbroken R-symmetry. More recently, Intriligator, Seiberg, and Shih (ISS) [5] observed that metastable supersymmetry-breaking and R-preserving vacua in supersymmetric gauge theories are, in a colloquial sense, quite generic. Majorana gaugino masses require breaking of the R-symmetry; instead we explore the possibility that the gauginos are Dirac and the R-symmetry is unbroken.
Combined with the fact that these vacua can preserve large nonabelian flavor symmetries, this makes it natural to use ISS models to build R-symmetric models of direct mediation of supersymmetry breaking.

The MRSSM

For completeness, we recall here the main features of the MRSSM as an effective softly broken supersymmetric extension of the Standard Model (SM) with an R symmetry. The most important differences from the MSSM are the extended gauge and Higgs sectors and the R-charge assignments. The quarks and leptons of the SM and their superpartners are described by R-charge 1 chiral superfields, while the R-charges of the two Higgs doublet superfields, H_u and H_d, are zero. To allow for R-symmetric gaugino masses, SM-adjoint chiral superfields Φ_{1,2,3} of R-charge 0 are introduced. An additional pair of Higgs doublets, R_u and R_d, of R-charge 2 is needed to allow R-symmetric µ_{u,d}-terms. The R symmetry forbids the new Higgs fields from coupling to SM matter through renormalizable operators. While we will refer to U(1)_R as the "R-symmetry," we should stress that for most phenomenological purposes a Z_4 subgroup suffices, while a Z_6 is sufficient to forbid soft dimension-5 operators violating baryon and/or lepton number, such as QQQL and QQQR_u. The MSSM µ-term is forbidden by the U(1)_R, and there are new superpotential terms allowed by U(1)_R and the SM gauge symmetry, including the µ_u H_u R_u and µ_d H_d R_d terms discussed below. The allowed R-symmetric soft terms are: soft scalar masses, Dirac gaugino masses (combining the Weyl gauginos of the gauge supermultiplets with the fermion components of the SM-adjoint chiral superfields), holomorphic and nonholomorphic masses for the scalar components of Φ_{1,2,3}, and the usual B_µ h_u h_d term; the MSSM A-terms and Majorana gaugino masses are forbidden. As explained in [1], the Dirac nature of gauginos, the absence of A-terms, and the extended Higgs sector (all features following from the R-symmetry) can combine to address flavor problems in various regions of tan β.

Outline

In this paper we present a model that uses direct gauge mediation and the metastable solution of ISS to generate the MRSSM. In the next section we discuss the relevant details of the ISS model and how it can be used to generate direct gauge mediation with an R symmetry. We also introduce notation for computing masses that will be used throughout the paper. In Section 3 we consider how to use the model presented in Section 2 to generate soft terms in the visible sector. This section is divided into two parts: contributions from the cutoff-scale UV physics, and direct contributions from the messenger sector, which we call "IR contributions." At this stage we also discuss a generalization of the model, in which we identify the important features of the metastable ISS solution and consider how these essential features can be extracted in a general, phenomenologically viable way. Then, in Section 4, we present some examples of qualitatively different spectra and discuss constraints such as perturbativity and tuning. A thorough study of the phenomenology of these models, such as the details of the EWSB sector, collider signals, dark matter, etc., is left for future work.

ISS and R-symmetric direct gauge mediation

Direct gauge mediation postulates that the SM gauge group G_SM is part of the global symmetry of the supersymmetry-breaking sector, thus relaxing the need to have a separate messenger sector of supersymmetry breaking. Dynamical models of direct mediation have been considered in the past; see, e.g., [6,7].
The ISS models [5] of metastable supersymmetry breaking are attractive setups for constructing models of gauge mediation, particularly in the R-symmetric setup. As we shall see in this paper, using ISS as an illustrative example of an R-symmetric supersymmetry-breaking/mediation sector will teach us some general lessons on R-symmetric mediation; these open the way for the future study of more general models with different phenomenology.

The "electric" (high-energy) ISS model is supersymmetric QCD with gauge group SU(N_c) and N_f flavors of quarks Q and Q̄, with a tree-level superpotential:

W_el. = Tr m Q̄Q. (2.1)

The dual "magnetic" (low-energy) theory has gauge group SU(Ñ_c), Ñ_c = N_f − N_c, N_f flavors of magnetic quarks q, q̄, gauge singlets M transforming as (N_f, N̄_f) under the flavor group, and a superpotential

W_magn. = Tr q̄Mq − Λ Tr mM + ..., (2.2)

where the dots denote nonperturbatively generated terms (which are not important in the metastable supersymmetry-breaking vacuum) and Λ is the duality scale. As ISS show, there exists a metastable supersymmetry-breaking vacuum in this theory, since the equation of motion for M following from (2.2), q̄_i · q_j = Λ m_ij (the dot denotes summation over the gauge indices), cannot be satisfied for N_f > N_c and a mass matrix of maximal rank N_f, due to the rank condition: the rank of the left-hand side is at most Ñ_c = N_f − N_c < N_f. The flavor symmetry preserved by the mass terms in (2.2) is broken in the supersymmetry-breaking vacuum, while an R-symmetry, under which M has R-charge 2 and q, q̄ have R-charge 0, remains unbroken. That the R-symmetry is unbroken follows from the Coleman-Weinberg calculation of [5], which shows that while the dual quarks get expectation values, the trace of M, which is a classical flat direction, does not. R-symmetry breaking is needed to obtain Majorana gaugino masses. Thus, a lot of the model building using ISS and other supersymmetry-breaking models has focused on breaking the R symmetry, either explicitly or spontaneously; see, for example, [8-20]. As described in the Introduction, in light of the recent observations of [1] on the interesting phenomenological features of supersymmetric models with unbroken R symmetry, we explore here the contrary possibility. We build R-symmetric models of direct gauge mediation, where gauginos are Dirac, and study their phenomenological consequences.

The supersymmetry-breaking/mediation sector

To be more concrete, we consider a simple ISS model allowing for direct gauge mediation.¹ For simplicity, we take Ñ_c = 1, N_f = 6 (N_c = 5), as done by [11]. The "magnetic" dual theory is then an O'Raifeartaigh model. The supersymmetry-breaking vacuum has a reduced vectorlike global symmetry, SU(6)_V → SU(5)_V, due to the vevs of the dual squark fields q and q̄. We will describe the model in terms of a set of fields with definite quantum numbers under the unbroken SU(5)_V, related to the ones in (2.2) as in Eq. (2.3). In addition to the fields in (2.3), as we will shortly explain, our model also requires the introduction of two other fields, which transform as adjoints under SU(5)_V and carry vanishing R-charge. We call these fields M′ and Φ. In what follows it will only be necessary for Φ to be an adjoint under G_SM rather than under the full SU(5)_V symmetry (Φ will be used to give Dirac masses to the gauginos). This avoids the need for the "bachelor" fields of [21]; they can be added with minimal trouble, but in the spirit of minimizing the model we leave them out.
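The rank-condition argument is easy to check numerically: for Ñ_c < N_f the matrix q̄ · q (an N_f × N_f matrix built from Ñ_c-component "gauge" vectors) can never reach rank N_f, so the F-term equations for a maximal-rank mass matrix cannot all vanish. A short sketch, using generic random matrices (the specific numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
Nf, Nc_tilde = 6, 1          # the ISS example used in the text (O'Raifeartaigh)

# Dual squarks: each flavor is an Nc_tilde-component gauge vector.
q    = rng.normal(size=(Nc_tilde, Nf))
qbar = rng.normal(size=(Nc_tilde, Nf))

meson_eom = qbar.T @ q       # (qbar_i . q_j): an Nf x Nf matrix
print("rank(qbar.q) =", np.linalg.matrix_rank(meson_eom),
      " (max possible:", Nc_tilde, ")")
print("rank needed for all F_M = 0 with generic m:", Nf)
# rank(qbar.q) <= Nc_tilde = Nf - Nc < Nf, so F_M = qbar.q - Lambda*m
# cannot vanish component by component: supersymmetry is broken.
```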
The charges of the various superfields of the supersymmetry-breaking/mediation sector under the global SU(5)_V symmetry of the ISS model, the U(1)_R symmetry, and a residual U(1) global symmetry (which is spontaneously broken by the dual squark vevs) are given in Table 1.

The spontaneous breaking of SU(6)_V → SU(5)_V in the ISS model leaves behind a massless Nambu-Goldstone (NG) boson in the messenger sector. However, the gauging of G_SM explicitly breaks the full SU(6)_V, and the NG boson acquires a mass. Since the symmetry is broken in this way, we consider the more general case where we "tilt" the couplings in the superpotential of Eq. (2.2) so that the SU(6)_V symmetry is explicitly broken, keeping fixed certain ratios of couplings, as would be the case for the gauging of G_SM (e.g., κ in W_magn below). Finally, the most general nontrivial tilting of the superpotential that is consistent with the remaining symmetries is given in Eq. (2.4): W_magn, Eq. (2.5), is the (tilted) ISS superpotential from Equation (2.2), while W_1, Eq. (2.6), contains additional terms, which explicitly break the global U(1) of Table 1.² The couplings in W_1 are needed to generate the Dirac gaugino mass. For now, we simply postulate a C-parity, defined in the caption of Table 1, which explains the relative minus sign in (2.6); we will come back to this point below. Notice that we can recover the SU(6)_V limit by setting κ = κ′ = ω = 1. By rephasing fields it is possible to take all the parameters in (2.5) and (2.6) to be real, which we do in the following.

² In a complete SU(6)_V description this term can be thought of as originating from a term q̄[Φ̃, M̃]q in the magnetic superpotential (2.2), with Φ̃ being an extension of Φ to SU(6)_V, similar to the relation between M and M̃ in (2.3); this allows one to follow Seiberg duality in the construction of the corresponding electric theory, if such a thing is desired. However, for the purposes of this paper we simply treat these terms as additional terms allowed by the symmetries, making no assumptions as to the origin of the y couplings.

Scales of supersymmetry breaking and mediation

The F-term equations at the SUSY-breaking metastable minimum of (2.5) give Eqs. (2.7), (2.8).³ We also find ϕ = φ̃ = N = N̄ = X = 0, all with masses near f. The other fields are stabilized at higher order in the loop expansion, as we will see below. At the minimum (2.7), (2.8) the scalar mass-squared terms are given by (2.9) and the fermion masses by (2.10). Notice that all the masses can be rescaled so as to depend on two variables, defined in (2.11), and we can define an overall messenger mass scale, (2.12), which is independent of SUSY breaking. From the above mass matrices we see that the N, N̄ scalars, as well as the two fermion messengers, all have mass M_mess. The mass eigenstates of the upper 2 × 2 block of the scalar mass matrix are given in (2.13) and have the mass squareds (2.14). Hence, to avoid tachyons, we require z ≤ 1. In fact, z = 1 is the SU(6)_V limit, where there is a massless messenger, as we can see explicitly from (2.14). Further on in our analysis we will take z ∼ 0.9. We note from (2.14) that there is a significant mass hierarchy in the messenger sector for small breaking of SU(6)_V.

³ Notice that canonically normalizing Tr M would require a factor of √5 to be introduced in Equation (2.8), as in [11]. This factor can be reabsorbed into our definition of ω, and not doing so only serves to clutter the notation, so we do not include it here. Omitting this factor has no effect on the low-energy phenomenology of the model.
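The tachyon bound and the near-SU(6)_V hierarchy can be illustrated with a two-level sketch. The matrix structure assumed below, diagonal entries M_mess² and off-diagonal entries zM_mess², is our reading of (2.13)-(2.14): it yields eigenvalues M_mess²(1 ± z), a massless state exactly at z = 1, and a tachyon for z > 1.

```python
import numpy as np

def messenger_masses(M_mess, z):
    """Eigenvalues of the assumed 2x2 messenger scalar mass matrix.

    [[M^2, z M^2], [z M^2, M^2]] -> M^2 (1 -/+ z); z parametrizes the
    SUSY-breaking splitting in units of M_mess^2 (see lead-in).
    """
    m2 = np.array([[1.0, z], [z, 1.0]]) * M_mess**2
    return np.sort(np.linalg.eigvalsh(m2))

for z in (0.5, 0.9, 0.99, 1.0, 1.1):
    light2, heavy2 = messenger_masses(1.0, z)
    if np.isclose(light2, 0.0):
        tag = "massless messenger (SU(6)_V limit)"
    elif light2 < 0:
        tag = "tachyon: vacuum unstable"
    else:
        tag = f"mass hierarchy m+/m- = {np.sqrt(heavy2 / light2):.1f}"
    print(f"z = {z:4.2f}: m-^2 = {light2:+.3f}, m+^2 = {heavy2:.3f}  ({tag})")
```

At the working point z ∼ 0.9 the two messenger scalars are already split by a factor of about 4.4 in mass.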
The F-term conditions (2.7) and (2.8) do not fix the vacuum expectation values of M and ψ, ψ̄, which we parametrize as in Eqs. (2.15), (2.16) in terms of the fields η and ξ. At one loop, following the calculation of ISS, we find M = η = 0, with masses one order of magnitude below the messenger scale. This leaves the massless field ξ, which is the NG boson of the spontaneously broken U(1) of Table 1. The Yukawa terms in W_1 break the U(1) symmetry; hence this NG boson gets a mass starting at two loops, due to the diagram shown in Figure 1. In the Appendix we calculate the diagram and show that it generates the potential (2.17) for ξ, with positive µ² given in (A.1). Thus the minimum of the potential is at ξ = 0, leading to the conclusion that the C-symmetry of the model is not spontaneously broken, and to the mass (2.18) of ξ, where H(z) is given in (A.3). Here we simply note that H(1) = 2π²/3 and that H vanishes in the supersymmetric (z = 0) limit.

Finally, we discuss the remaining messenger fermions. These come with the mass matrix (2.19), where v ∼ M_mess is given by (2.7) and we have set η = ξ = 0. This matrix can be diagonalized, with the result that the X fermion has mass m_X = v, while the ψ, ψ̄ fields mix maximally, one combination getting a mass √2 v and the other remaining massless. This spectrum is not a surprise: the X fermion, having R = +1, can only mix with the goldstino, the fermionic partner of Tr M; one of these fermions marries the gravitino, while the other has a mass ∼ M_mess. The ψ, ψ̄ fermions each have R = −1 and can mix. That there is a massless fermion is not surprising either, since the ψ, ψ̄ sector contains the pseudo-NG boson discussed above, and by supersymmetry this must come with a massless fermion (notice that there is no supersymmetry breaking in these fermion masses). The ψ, ψ̄ superfields can couple to the SM fermions starting at two loops, with gauge fields and messengers in the loops; however, since these operators are generated by gauge bosons, they are flavor diagonal. They can then generate four-fermion (flavor-conserving) operators that, thanks to supersymmetry, are finite and small, with any divergent loop integrals cut off by the ξ mass (2.18). Such massless fermions might have some interesting phenomenological or cosmological consequences; because of the R symmetry they can only be pair-produced. We will not say any more about them here. This completes the discussion of the spectrum in the messenger sector. We may now discuss the phenomenology of the visible sector; before doing so, we comment on a few technical features of our model.

Dirac gaugino masses, C-parity, and the extra adjoints

Generating a Dirac gaugino mass requires a chirality flip on a fermion line, as explained in Section 3.2. This can only come from a superpotential fermion mass term, and it requires the sum of the R-charges of the fields involved to be 2. The mass of the scalar involved in the loop must differ from that of the fermion; if it does not, there is a cancellation and the gaugino mass is zero. This SUSY-breaking splitting must come from off-diagonal terms in the scalar mass matrix, since otherwise the supertrace is non-zero and there will be logarithmic divergences in the scalar masses [22]. Only scalar fields of zero R-charge can have these off-diagonal mass terms. Hence, in order to generate a nonzero Dirac gaugino mass in R-symmetric gauge mediation, one needs fields with both R-charge 2 and R-charge 0. The model discussed here is of this general form: the ϕ and φ̃ have zero R-charge and acquire off-diagonal masses (2.9), while the fields N and N̄ have R-charge 2 and supply the needed chirality flip.
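The R-charge selection rule used in this argument (a fermion bilinear is R-invariant only if its R-charges sum to zero) can be packaged as a few lines of bookkeeping. The charge table below is read off the text (gauginos carry R = +1; the fermion of the R-charge-0 adjoint Φ carries R = −1; the fermions of the R-charge-2 fields N, N̄ carry R = +1; ψ, ψ̄ carry R = −1); treat it as a sketch of the bookkeeping rather than the full Table 1.

```python
# Fermion R-charges (for a chiral superfield, R[fermion] = R[superfield] - 1;
# gauginos carry R = +1).
R = {
    "lambda":   +1,   # gaugino
    "psi_Phi":  -1,   # fermion of the R-charge-0 adjoint Phi
    "chi_N":    +1,   # fermion of the R-charge-2 messenger N
    "chi_Nbar": +1,
    "psi":      -1,
    "psibar":   -1,
}

def mass_term_allowed(f1, f2):
    """A fermion mass term f1*f2 is R-invariant iff the charges cancel."""
    return R[f1] + R[f2] == 0

print("Majorana gaugino mass (lambda lambda):",
      mass_term_allowed("lambda", "lambda"))     # False: requires R-breaking
print("Dirac gaugino mass (lambda psi_Phi):  ",
      mass_term_allowed("lambda", "psi_Phi"))    # True
print("Chirality flip in the loop (chi_N psi):",
      mass_term_allowed("chi_N", "psi"))         # True
```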
We will discuss a more general version of this model, involving fewer adjoints, in Section 3.2. Recall now that in two-component notation the Dirac gaugino mass term is

m_D λψ + h.c., (2.20)

where λ is a Weyl fermion in the adjoint of the gauge group, part of the N = 1 vector multiplet, and ψ is the Weyl fermion component of a chiral supermultiplet Φ, also in the adjoint of the gauge group. In addition to preserving an R-symmetry, Dirac gaugino masses (2.20), as opposed to Majorana masses, are odd in the gaugino field λ. Hence they change sign under C-parity if only λ is C-odd. Note that this already implies that Dirac gaugino masses cannot be generated by coupling the adjoint field M from the supersymmetry-breaking sector to the gauginos λ, even in modifications of the ISS model with broken R symmetry, as the field M is even under C (provided the ISS modification does not break C). We chose to assign negative C-parity to the chiral adjoint Φ, making the Dirac gaugino mass C-even. This also requires the relative minus sign between the two couplings in W_1 in (2.6).

Naively, one might think that with a different ratio of the two couplings in (2.6) the loop-induced Dirac gaugino mass might be reduced or even made to vanish. Take, for example, the extreme case of a positive relative sign between the two terms in W_1. One might then argue that the Dirac gaugino mass should vanish: indeed, in this case we could modify our definition of C so that Φ is even, thus forbidding the loop-induced Dirac gaugino mass term (2.20). However, in this case the diagram of Figure 1 would generate a positive-cosine effective potential for ξ, instead of (2.17), leading to spontaneous breaking of the C symmetry and giving rise to the same absolute value of the loop-generated Dirac gaugino mass.⁴

However, a choice of C with even Φ, or the absence of any symmetry, would allow the generation of a tadpole for Φ_Y, the gauge-singlet hypercharge "adjoint." Such tadpoles are known to destabilize the hierarchy; see, e.g., [23]. Having Φ_Y odd under an unbroken discrete symmetry eliminates this tadpole, at least the contribution from the supersymmetry-breaking/messenger sector. This parity can also be used to forbid kinetic mixing of the SUSY-breaking spurion with the hypercharge gauge field strength, which could lead to large tachyonic scalar masses. C-violation in the SM may introduce other contributions, which will involve loops of quarks and leptons and will be suppressed by products of SM gauge and Yukawa couplings. In what follows we assume that these contributions are small and can be ignored. This is similar to the standard "messenger parity" that goes along with gauge mediation [24-27], except that here Φ is also charged under the parity.

The introduction of yet another zero-R-charge adjoint, M′, of even C-parity, is necessitated by the requirement of giving the adjoint M a mass: in the absence of R-symmetry breaking, the fermionic components of M are forbidden from obtaining loop-induced masses, contrary to what is usually expected in models where R is broken.

Finally, note that G_SM ⊂ SU(5)_V, and therefore the appearance of these new messengers will have a strong effect on the running of the Standard Model couplings. In particular, all the couplings lose asymptotic freedom and develop Landau poles. For the typical choices of parameters used below, these Landau poles occur relatively close to the messenger mass scale M_mess.

Soft terms in the visible sector

Now we proceed to the calculation of the soft terms in the visible sector.
To begin, we note that there are two main sources of visible-sector soft masses in our model:

1. Ultraviolet (UV) contributions due to higher-dimensional operators. As is typical in models with direct mediation of supersymmetry breaking, all couplings in the SM lose asymptotic freedom. In our model, the scales of the SM Landau poles are not too far above the messenger scale M_mess. As usual, the UV contributions cannot be calculated in the low-energy theory. We estimate the scale suppressing the higher-dimensional operators, and their contribution to the SM soft parameters, in Section 3.1 using naive dimensional analysis (NDA). The largest UV contributions are to soft scalar masses, which are expected to be flavor-nondiagonal, and to µ and B_µ terms. UV contributions to gaugino masses are suppressed, similar to the well-known pre-anomaly-mediation gaugino mass problem of supergravity hidden-sector models.

2. Infrared (IR) contributions to the soft parameters, which arise from loops of the particles in the direct-mediation sector and are calculable in the low-energy theory. Messenger loops generate Dirac gaugino masses and flavor-diagonal soft scalar masses. The IR contributions to the soft parameters are a loop factor below M_mess and are calculated in Section 3.2.

There is an interplay between these two types of soft masses in our model. As we discuss in Section 4.1, the scale suppressing the UV contributions to the soft parameters is about a loop factor above M_mess. Thus the loop-suppressed IR contributions are typically similar to those due to the higher-dimensional operators. This allows us, at the cost of moderate cancellations of the various contributions in the scalar sector (see Section 4.3), to realize the scenario proposed in [1], where Dirac gauginos heavier than the scalars suppress the flavor-changing neutral currents due to non-degenerate squarks.

Estimating the UV contributions

We begin by discussing the typical size of UV contributions. From eqn. (2.8), the F-term supersymmetry-breaking spurion of R-charge 2 is given in (3.1). Using this spurion, many R-symmetric higher-dimensional operators that communicate supersymmetry breaking to the SM can be written down. They are all suppressed by some high scale Λ, the value of which will be discussed later, in Section 4.1. These UV-operator-induced soft mass contributions are of order M_UV, defined in (3.2), where for future use we chose to rewrite M_UV in terms of the messenger scale M_mess and the dimensionless parameters of eqns. (2.11)-(2.12).

Λ is the scale at which these operators are generated and is a model-dependent parameter. However, before we study the operators that are generated at this scale, a few words can be said about its size. One possibility is that Λ ∼ M_P: this is the usual expectation from gauge + gravity mediation, where any "UV operators" are generated by new physics at the Planck scale and are irrelevant. It solves the flavor problem trivially, since all flavor-changing operators are Planck-suppressed; however, it assumes that all physics below the Planck scale is flavor-conserving, which is a strong assumption. As it does nothing to realize the features of the MRSSM, we do not consider this possibility further here. The other extreme is that Λ is related to Λ_3, the QCD Landau pole, where presumably there is a new dual description that takes over. It is quite reasonable to assume that there are new states in this dual theory that can generate flavor-violating operators.
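Whatever the microscopic origin of Λ, the size of the UV contributions follows from simple power counting. The following is our estimate, assuming the spurion normalization Ξ ∼ θ²f² quoted for the generalized model of Section 3.2:

  M_UV ∼ f²/Λ ∼ M²_mess/(λΛ) ,

where the second relation trades f for the messenger scale using the dimensionless parameters of (2.11)-(2.12); this is the combination that reappears in the tuning discussion of Section 4.3.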
We will discuss more careful estimates of Λ below, but as this turns out to be the most constraining possibility, we will consider it throughout the paper. We now enumerate the R-symmetric higher-dimensional operators that can be written down. Dirac gaugino masses m_1/2 can be generated by the "supersoft" operator (3.3) of [21]. Similarly, soft scalar masses m²_0,ij, generically flavor non-diagonal, for the SM fields (say, quark superfields Q) are given by (3.4), where c_ij is a naively flavor-anarchic matrix with O(1) entries. We note that unless M_UV/Λ = O(1), Dirac gaugino masses due to higher-dimensional operators are suppressed compared to the soft scalar mass. In addition, the smallness of this operator means that we can ignore supersoft contributions to the scalar masses [21]. As we will see below, the problem of too-small gaugino masses due to higher-dimensional operators will be addressed by direct gauge mediation in this model, along with an estimate of the relevant cutoff scale.

Next, we recall that in the R-symmetric MSSM the usual µ-term is forbidden by the R-symmetry and that there are, instead, two µ-terms, µ_u H_u R_u and µ_d H_d R_d, where R_u,d are two new R-charge-2 Higgs doublets. The µ_u,d terms, as opposed to those of the MSSM, preserve a Peccei-Quinn (PQ) symmetry, which forbids B_µ but not µ_u, µ_d (H_u,d can be taken to have PQ charge +2, R_u,d charge −2, and the quark and lepton superfields charge −1). This symmetry implies that, unlike in the MSSM, µ_u/d and B_µ originate from different operators. The B_µ term B_µ h_u h_d is, however, allowed by the R-symmetry. B_µ is generated by an R-preserving Giudice-Masiero-type operator, (3.5), which yields B_µ similar in size to the soft scalar mass (3.4). The µ_u,d terms are instead generated by R-preserving operators of the form (3.6).

In addition, there is an operator, (3.7), that is not forbidden by any symmetry allowing (3.5), is renormalizable, is naively expected to be unsuppressed, and generates an unacceptably large B_µ term. However, one can put forward arguments in defense of ignoring (3.7). The only difference between the desirable (3.5) and the undesirable (3.7) (as written) is that the former vanishes as Λ → ∞ while the latter does not. Now, the scale Λ is expected to be proportional to the SM Landau pole. Thus all UV-suppressed operators coupling the SM to the supersymmetry-breaking sector that we have written so far, except (3.7), vanish as one takes the SM gauge couplings to zero, since the Landau pole scale goes to infinity in this limit. One might adopt a broad definition of "gauge mediation" by requiring that all couplings of SM fields to supersymmetry-breaking-sector fields vanish as one takes the SM gauge coupling to zero (and, in our model, the coupling y of W_1, which may be related by a high-scale N = 2 supersymmetry to the gauge coupling). Clearly, imposing this criterion amounts to an assumption about the nature of the unknown UV theory: in particular, it should have an accidental PQ symmetry which forbids (3.7) but is broken by higher-dimensional operators such as (3.5). In the absence of an explicit dual, it is hard to be more precise; in practical terms, in what follows we will set the coefficient of (3.7) to zero and appeal to technical naturalness in supersymmetry.

The scalars in the adjoint chiral multiplets Φ (of zero R-charge) will also obtain soft masses of order M_UV from Kähler potential terms such as (3.8). We could also write a large superpotential "B term" for Φ, but choose not to, for the same reasons as for avoiding (3.7).
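For orientation, the relative size of these UV contributions follows from spurion counting. Because the spurion carries R-charge 2, the Dirac gaugino mass operator needs two insertions of it (as discussed in Section 3.2 below), while the scalar mass operator needs one insertion of the R-neutral combination Ξ†Ξ. Our schematic estimate:

  m²_0 ∼ c_ij (f²/Λ)² = c_ij M²_UV ,   m_1/2 ∼ (f²)²/Λ³ = M_UV · (M_UV/Λ) ,

so m_1/2/m_0 ∼ M_UV/Λ, which is small unless M_UV/Λ = O(1), as stated above.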
Finally, as explained in Section 2.3, to avoid massless SM adjoint fermions, we introduced (see Table 1) another SU(5)_V adjoint, M′, of zero R-charge. The R-preserving operator (3.9) gives rise to a Dirac mass for M and M′ of the same order as the soft scalar mass (3.4).

Calculating the IR contributions

We now consider the calculable IR contributions to the soft mass parameters. There is a one-loop contribution (similar graphs were considered in [28]) to the Dirac gaugino mass from graphs involving the ϕ (φ̄) and N̄ (N) messengers, shown in Figure 2, as well as the usual two-loop gauge-mediated contributions to the scalar masses. We now proceed to calculate these soft masses.

The diagram of Figure 2 involves an R-preserving fermion mass insertion and a scalar with a SUSY-breaking mass, and generates a Dirac gaugino mass. Using the values of our masses and couplings from Section 2.2, we find that the loop-induced Dirac gaugino masses can be written as in (3.10), with the loop function R(z) given in (3.11), where z is defined in (2.11) and measures the off-diagonal supersymmetry-breaking mass splitting in the scalar mass matrix (2.9). Notice the dependence of the gaugino mass on cos(⟨ξ⟩/v). Since ⟨ξ⟩ = 0 (see the discussion in Section 2.3 and the Appendix), this factor is just 1. In principle, the SU(3), SU(2), and U(1) pieces of SU(5)_V may have different κ and y coefficients. However, for simplicity, we take the 3-2-1 pieces to all be the same; relaxing this would affect the relative sizes of the gauginos and sfermions associated with each SM group.

The sfermions acquire a gauge-mediated mass from loops involving the messengers ϕ, φ̄, but not N, N̄, since the latter do not have supersymmetry-breaking masses. Following [29], this contribution can be calculated. The contribution from gauge group a to a sfermion mass squared is given by (3.12), with the group-theory factor equal to (N² − 1)/2N for SU(N) and (3/5)Y² for U(1)_Y, and with the function F(z) defined in (3.13). We note that the contribution of the R-symmetric messenger sector to soft scalar masses (3.12) is the same as that of one messenger multiplet in usual gauge mediation. The function F(z) from (3.12), with our parameter z identified with F/(λS²) of usual gauge mediation, is the same as that appearing in, e.g., [29].

The Dirac gaugino mass (3.10), however, is governed by a different function of z than in the case of a Majorana mass. This qualitative difference arises because the Dirac gaugino mass requires the presence of an R-preserving chirality flip in the loop. This R-symmetric chirality flip does not appear in the two-loop diagrams generating the scalar mass, which are thus identical to those in one-flavor gauge mediation: the messenger scalars ϕ and φ̄, which have a supersymmetry-breaking spectrum, contribute to the scalar masses, while N and N̄, which are supersymmetric, do not. In addition, note that |R(z → 0)| → z², unlike usual gauge mediation, where m_1/2 ∼ z. This is easy to understand: because of the R-charges, the Dirac mass operator (3.3) needs two insertions of the spurion ∼ θ²f², unlike a Majorana mass, which needs just one insertion. This qualitative difference leads to the general fact that in R-symmetric gauge mediation the gaugino mass is typically smaller than the scalar mass, in contrast to usual gauge mediation, where the m_1/2 : m_0 ratio is larger than unity, see [29]. The ratio of gaugino to sfermion mass in R-symmetric gauge mediation is given in (3.14). The ratio R/√F, as Figure 3 shows, is strictly less than 1: for z = 0.99, |R/√F| = 0.64.
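For reference, the one-messenger gauge-mediation result that (3.12) reproduces has the following schematic form; this is our paraphrase of the standard expression of [29], not necessarily the paper's exact normalization:

  m̃² = 2 (F/M_mess)² Σ_a C_a (α_a/4π)² F(z) ,  with F(z → 0) → 1 ,

where F is the F-term of the spurion and C_a the group-theory factor quoted above. Combined with |R(z → 0)| → z² for the Dirac gaugino mass, this makes the suppression of m_1/2/m_0 at small z explicit.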
Thus, in order to solve the supersymmetry flavor puzzle along the lines of [1], which requires large gaugino-to-squark mass ratios, within an ISS supersymmetry-breaking-cum-mediation sector, we must have a large Yukawa coupling y (near the boundary allowed by perturbativity, as we will discuss in Section 4.3).

Finally, the scalar adjoint fields in the Φ supermultiplets also get real and holomorphic masses from the messenger loops. These are given by (3.16) and (3.17), and the z-dependence in (3.16) is the same as in the gaugino mass (3.11). These masses are of the same order, but it can be seen that |B_Φ| < m²_Φ for any value of z, so the gauge symmetry is protected. Also notice that B_Φ is strictly negative, which means that the scalar will always be lighter than the pseudoscalar. This is the reverse of ordinary supersoft mediation [21]. Notice that since this is a one-loop scalar mass, it is enhanced compared to the gaugino mass, as in (3.18), where α is the fine-structure constant of the relevant gauge group. Thus we generally expect the adjoint scalars to be roughly an order of magnitude heavier than the gauginos, although there could be a sizable cancellation between the real and holomorphic masses. In addition, there could be cancellations with the UV operators that we defined in the previous section, (3.8).

Generalized R-symmetric gauge mediation

In this section, we introduce a model of generalized R-symmetric gauge mediation. Inspired by previous discussions of generalized gauge mediation, see e.g. [29], we implement supersymmetry breaking in terms of an R-charge-2 spurion Ξ ∼ θ²f², instead of a dynamical supersymmetry-breaking sector. From the ISS model considered in the previous sections, we learned that only the fields ϕ, φ̄ and N, N̄ of ISS play a role in the mediation of supersymmetry breaking to leading order in the loop expansion. Furthermore, as we explained in Section 2.3, this is the minimal set of messenger fields required to achieve R-symmetric gauge mediation. Thus, in our generalized model, we keep only these fields and consider a messenger sector consisting of N_mess copies, eqn. (3.19). Here Ξ is the supersymmetry-breaking spurion (3.1), M_mess is a rigid messenger mass scale, and the R-assignments of the multiple copies of messengers are the same as those of their namesakes of Table 1, as is their C-parity.

The messenger sector (3.19) gives rise, through the same set of one- and two-loop diagrams as those discussed in the previous section, to N_mess times the gaugino mass contribution of (3.10) and N_mess times the scalar mass-squared contribution of (3.12), where we reinterpret z = f/M_mess. Thus the ratio of loop-induced gaugino to scalar mass of eqn. (3.14) is enhanced by a factor of √N_mess. This enhancement of the Dirac gaugino mass by √N_mess in the generalized model relaxes (some of; see Section 4.3) the need for a large Yukawa coupling y. In addition, the absence of the SM adjoints M, M′ from (3.19) pushes the SM Landau pole up: we note that the α_s beta function of the MRSSM vanishes already above the scale of the Dirac gaugino mass, and thus adding any colored messenger inevitably leads to a Landau pole. We will have more to say about this below.
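The √N_mess enhancement quoted above follows from simple counting: each messenger copy adds linearly to the Dirac gaugino mass and linearly to the scalar mass squared. Schematically (our sketch):

  m_1/2 → N_mess m_1/2^(1) ,  m̃² → N_mess (m̃^(1))²  ⟹  m_1/2/m̃ → √N_mess · m_1/2^(1)/m̃^(1) ,

where the superscript (1) denotes the single-messenger contribution.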
To end this section, we note that, in light of its phenomenological desirability, it would be of some interest to have a UV completion of the generalized messenger model of (3.19), ideally including both the dynamics of supersymmetry breaking and the generation of the messenger mass scale M_mess, without introducing the extra adjoint baggage of the ISS model; we leave this for future work.

How high can Λ be?

It is well known that in order to avoid constraints from K-K̄ mixing, the relevant flavor-violating dimension-six operator must have a cutoff Λ_Q ∼ 10³ TeV. Thus we need to choose parameters such that our cutoff is no smaller than this. To understand how large the scale suppressing the UV contributions (Λ) can be, we must consider the location of the Landau pole. Consider the one-loop beta functions of the G_SM couplings. We must also consider how the spectrum behaves, since the running will be sensitive to the fermionic and bosonic mass thresholds of the various multiplets. We solve the one-loop renormalization group equations, including the various contributions as we pass their mass thresholds, but we do not include finite threshold effects.

The presence of a large number of fields charged under the SM means that the Landau pole of SU(3) typically occurs at a relatively low scale, resulting in potentially sizeable UV-induced soft masses (3.2). However, the Dirac gaugino mass will still be too small if the UV-generated operator (3.3) is its only source. For Yukawa couplings in (2.6) of order one, the gauginos have phenomenologically viable masses, but the gluino will still be somewhat lighter than the squarks, see (3.14). Without a large value of y it is not possible to realize the scenario of [1]. For larger values of y, sufficient to allow for large squark mixing and the interesting flavor physics of the MRSSM, there will be a Landau pole for some Yukawas below the strong-coupling scale of SU(3). The generalized model of Section 3.2 alleviates some of these issues by removing some of the adjoints, which raises the Landau pole, and by increasing the number of messenger families, which lowers the Landau pole but also raises the gaugino-to-scalar mass ratio.

Once the location of the SU(3) Landau pole Λ_3 has been determined, we may estimate the size of the UV contributions. If all gauge and Yukawa couplings became strong at the same scale, one would expect the scale Λ of Section 3.1 to be related to the strong-coupling scale Λ_3 by Λ_3 ∼ 4πΛ. However, not all couplings become strong at the same scale, and the operators involve Ξ, which is not charged under SU(3). Such operators should carry a suppression from the perturbative coupling, which is weak at that scale, weakening some of the constraints that we will find below. Of course, while there should be a suppression, it is hard to estimate: above Λ_3, in the absence of an explicit dual description, we have no idea how the other couplings run (as we have a duality cascade, where after dualizing SU(3), the other gauge content will change) or where the other Landau poles now are. For the purposes of estimating the UV contributions, we therefore make the simplifying but conservative assumption that Λ = Λ_3/4π, which potentially overestimates the size of the UV contributions, especially in the electroweak sector.

Sample Spectra

In this section we consider three examples of spectra: the full ISS model with perturbative Yukawas, the full ISS model with large y (and consequently large gaugino masses), and the generalized model. In all cases we take z = 0.99.
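The Landau-pole locations quoted for these spectra follow from the one-loop running just described; for reference, a sketch in standard conventions, with b > 0 for a coupling that has lost asymptotic freedom:

  α⁻¹(µ) = α⁻¹(µ_0) − (b/2π) ln(µ/µ_0) ,  Λ_pole = µ_0 exp[2π/(b α(µ_0))] ,

with b recomputed at each mass threshold, as in the procedure described above.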
All squark and slepton masses in the tables below are from the IR (direct gauge mediation) contribution; we will discuss the UV mass contributions in the next section. (In Tables 3 and 4, only the IR contributions to squark and slepton masses are shown.)

Spectrum at small Yukawa

We first consider a case where the Yukawas of (2.5) and (2.6) remain perturbative up to the Landau pole of SU(3); here we take y = 2, λ = 1, and all other Yukawas O(1). As discussed below (3.14), this results in too light a gluino mass, and this situation does not allow for large squark mixing. We will assume that the UV contributions to the scalar masses have small coefficients, so that the flavor-diagonal IR contributions (3.12) dominate. Solving the RGEs, we find the spectrum, at the messenger scale, shown in Table 2; the Landau pole occurs at Λ_3 ∼ 8 × 10³ TeV and α_3(M_mess) ∼ 0.12.

Spectrum at large Yukawa

As discussed in Section 3.2, to get large gaugino masses, and so allow large sflavor violation in the MRSSM [1], we need a large Yukawa; here we consider the case y = 8, with all other Yukawas O(1). For such a large Yukawa, the Yukawa Landau pole is close to the messenger scale. The squark masses are somewhat large, but below we will assume some cancellation between the UV (3.4) and IR (3.12) contributions, allowing for large squark mixing à la [1]; this will require some tuning, which we discuss in the next section. In this case, we find the spectrum of Table 3, with α_3(M_mess) ∼ 0.11 and Λ_3 ∼ 10⁴ TeV. The Landau poles of the other SM gauge groups are significantly higher but, as we mentioned above, "dualizing" color at Λ_3 would necessarily change that estimate. We emphasize that we do not expect this spectrum to be an accurate sample of parameter space with such a large Yukawa coupling; rather, we can see from this exercise that the only hope we have of realizing an MRSSM scenario through the IR masses is to go to strong coupling, which would necessitate a more detailed analysis, including the effects of higher loops.

Spectrum in the generalized model

In the models of generalized R-symmetric gauge mediation of (3.19), increasing the number of messenger families N_mess increases the ratio of the gaugino mass to the scalar mass. Furthermore, the SM Landau pole is postponed because of the absence of the SU(5)_V adjoints M, M′, which allows us to take a lower messenger scale. Performing the same analysis as above, we find that for y = 3, N_mess = 6, and M_mess = 80 TeV, we have α_s(M_mess) = 0.08 and Λ_3 = 5 × 10⁴ TeV. The corresponding spectrum is shown in Table 4. Because of the large number of messengers, the Yukawa has a Landau pole below Λ_3.

Estimation of tuning

Recall that there are two contributions to the soft squark masses: one from direct mediation, which is fixed by the calculation in Section 3.2, and the other from the UV operators in (3.4). The latter comes with a coefficient that we will call c_D for the flavor-diagonal terms and c_OD for the flavor-off-diagonal terms. Ideally we would like these coefficients to be O(1), and to solve the flavor puzzle we would also want c_D ∼ c_OD. This means that there are two potential sources of tuning: one coming from the UV-IR cancellation of the diagonal masses, and one coming from the smallness of the flavor-violating terms relative to the flavor-diagonal terms. We will discuss each of these in turn. First of all, some general comments can be made about the first kind of tuning, between the UV and IR contributions.
Recall that we made the conservative assumption that the scale of the UV operators is proportional to the QCD Landau pole Λ_3, i.e., Λ = Λ_3/4π. This means that M_UV ∼ M²_mess/(λΛ) is typically quite large, unless we wish to make λ big, which would introduce another Landau pole. This mass scale is typically O(10) TeV in the ISS models, and smaller in the generalized models, as can be seen from the tables in the previous section. If the final scalar mass is m_0, we have

  m_0² = m_IR² + c_D M_UV² ,  (4.3)

with m²_IR given by (3.12). If m_0 < m_IR ∼ 1 TeV, this means that |c_D| ∼ 10⁻² in the ISS models and |c_D| ∼ 1 in the generalized models. This is smaller than hoped for in the ISS case, although it does very well in the generalized model; but it should be noted that this depended on the cutoff being so low, and on our hope of avoiding another Landau pole in λ. If we are willing to accept strong coupling, or the added assumption that the generation of flavor-changing operators is postponed to a higher scale (the SU(2) Landau pole, for instance), then this tuning can be weakened.

To analyze the second form of tuning, if δ is the ratio of the flavor-changing mass-squared term to m_0², we have

  δ = c_OD M_UV² / m_0² .  (4.4)

Given Equations (4.3)-(4.4), we can immediately write down a formula that quantifies the flavor tuning:

  t ≡ |c_OD / c_D| = δ m_0² / (m_IR² − m_0²) .  (4.5)

Notice that this expression is independent of M_UV. Typical allowed values of δ are of order 0.1 or less [2], given m_1/2/m_0 of 5-10. We saw from (3.14) that m_IR is typically larger than or of order the gaugino mass, so we immediately see from (4.5) that this model will be somewhat tuned. For example, if we demand a 10% tuning, we require m_0 = m_IR/√2, which is very hard to achieve while maintaining the gaugino-to-squark ratio. Lowering our standard to a 1% tuning, we require m_0 = m_IR/√11, which is much easier to accomplish. So there is a trade-off. In Table 5 we give the flavor tunings for the two models considered in Tables 3 and 4. The values of δ ≡ δ_L = δ_R are the maximum values for the given m_0 and gluino mass after QCD corrections to K-K̄ mixing are taken into account [2].

Table 5: Size of the flavor tuning for the MRSSM spectra considered above.

  Model            | m_0     | δ    | t
  ISS with large y | 600 GeV | 0.05 | 1.4%
  General model    | 1 TeV   | 0.07 | 2.7%

Lifetime of the false vacuum

We have concentrated our attention on the physics around the SUSY-breaking vacuum of ISS, but this minimum of the potential is metastable. The true minimum of the system, whose existence is due to the higher-dimension, non-perturbatively generated term we ignored in (2.2), has unbroken supersymmetry. The additional operator is due to instanton contributions; in this section, Λ denotes the duality scale, the strong-coupling scale of the gauge coupling in the microscopic theory. Once this additional term is included, the rank condition can be satisfied and there is a SUSY-preserving minimum at large field values. Because the additional term is irrelevant, this SUSY-preserving minimum is far from the SUSY-breaking minimum. It is this distance that results in the metastable vacuum being very long-lived. Transitions from one vacuum to another are initiated by bubble formation; the rate for this process is determined by the four-dimensional Euclidean bounce action S_4, via Γ ∼ f⁴ exp(−S_4). In general, calculating the bounce action analytically is not possible and it must be determined numerically. For the case of ISS, however, the potential is well approximated by a square potential, for which there are known analytic solutions [32].
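For orientation, the longevity requirement is usually phrased as a lower bound on the bounce action. The following is the standard estimate, not the paper's specific derivation: demanding less than one bubble nucleation in our past light-cone,

  Γ t_U⁴ ∼ f⁴ t_U⁴ e^(−S_4) ≲ 1  ⟹  S_4 ≳ 4 ln(f t_U) ≈ 400

for f in the 10-100 TeV range, with t_U the age of the universe. The bound (4.10) below is the translation of such a condition into the model's parameters.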
The bounce action for our model is given by the expression derived in [5,11]. Requiring that the false vacuum lives longer than the age of the universe results in the requirement [33]

  Λ/f ≳ 3 .  (4.10)

As seen in Section 4.1, the SU(3) Landau pole, the upper bound on the duality scale, was approximately 100f, so (4.10) can easily be satisfied for the scales discussed earlier.

Discussion

In conventional models of supersymmetry breaking, the dynamics that leads to the breaking of supersymmetry also breaks the R-symmetry. When this breaking is communicated to the visible sector, it results in R-symmetry-violating gaugino masses, B_µ, and A-terms. There has been much recent interest in the ISS models of supersymmetry breaking, for which there exists a metastable supersymmetry-breaking vacuum that preserves the R-symmetry. If such models are to be phenomenologically viable, the gauginos must acquire a mass. Many variants of ISS have been explored that break the R-symmetry and allow for Majorana gaugino masses. Here we have discussed the alternative possibility that the R-symmetry is preserved and the gauginos instead acquire a Dirac mass. The Dirac gaugino mass and the sfermion masses are communicated to the visible sector through gauge mediation; hence we have a model of R-symmetric Gauge-Mediated Supersymmetry Breaking (RGMSB). Because the R-symmetry is preserved, the gauginos are Dirac, the A-terms are zero, and the Higgs sector now consists of four Higgs doublets: the field content of the MRSSM. We showed that the dependence of the gaugino mass on the supersymmetry-breaking scale differs from that of usual GMSB, but the scalar masses do not. The gaugino mass is lower than in usual gauge mediation.

We considered two examples of the R-preserving supersymmetry-breaking sector: a version of ISS, which may allow for direct mediation, and a generalization (an O'Raifeartaigh model) with fewer fields. The necessity of including an adjoint chiral superfield to act as the Dirac partner of the gauginos means that these models have Landau poles for the gauge couplings, the lowest of which is for SU(3). In the case of the ISS model, there are many new fields charged under the Standard Model and this Landau pole is low, typically a few decades above the scale of the messenger masses. For the O'Raifeartaigh model it can be somewhat higher. There are potentially new operators, such as flavor-non-diagonal scalar masses, generated at the strong-coupling scale. The size of these operators is unknown. If they are small, then the model is an R-symmetric version of gauge mediation, with a spectrum that differs somewhat from that of [29]. However, if they are large (but not too large), the model has all the features of the MRSSM. Making a conservative estimate of the size of these UV-generated operators, we found that it is possible to realize the MRSSM scenario of large flavor-violating couplings using R-symmetric gauge mediation, but only at the expense of introducing fine-tuning, or strong coupling, or both. If these operators were instead smaller than expected, then the spectrum of the MRSSM could be realized, but there would be no source of the large sfermion mixings (allowed because of the R-symmetry) that lead to the interesting flavor signatures. This does not rule out the possibility of the MRSSM, but it does suggest that a better understanding of the UV theory is required in order to decide how natural such a spectrum actually is.

Appendix

[...] the diagram is finite.
Expanding around the minimum, with the vacuum expectation values from Equations (2.15) and (2.16) and ⟨η⟩ = 0, we find that the effective potential for ξ is given by (2.17), where µ² is the value of the loop in Figure 1, given in (A.1). The loop integrals involved are those of [29] and have the form given in (A.2). This leads to the ξ mass of (2.18), with H(z) defined in (A.3). In particular, H(1) = 2π²/3, and H vanishes for z = 0 (the SUSY limit).
Receptor-dependent effects of sphingosine-1-phosphate (S1P) in COVID-19: the black side of the moon

Severe acute respiratory syndrome coronavirus type 2 (SARS-CoV-2) infection leads to hyper-inflammation and an amplified immune response in severe cases that may progress to cytokine storm and multi-organ injuries like acute respiratory distress syndrome and acute lung injury. In addition to pro-inflammatory cytokines, different mediators are involved in SARS-CoV-2 pathogenesis and infection, such as sphingosine-1-phosphate (S1P). S1P is a bioactive lipid found at a high level in plasma, and it is synthesized from sphingomyelin by the action of sphingosine kinase. It is involved in inflammation, immunity, angiogenesis, vascular permeability, and lymphocyte trafficking through G-protein-coupled S1P receptors. Reduction of the circulating S1P level correlates with COVID-19 severity. S1P binding to sphingosine-1-phosphate receptor 1 (S1PR1) elicits endothelial protection and anti-inflammatory effects during SARS-CoV-2 infection, by limiting the excessive IFN-α response and hindering mitogen-activated protein kinase and nuclear factor kappa B action. However, binding to S1PR2 opposes the effect of S1PR1, with vascular inflammation, endothelial permeability, and dysfunction as the concomitant outcome. This binding also promotes NOD-like receptor pyrin 3 (NLRP3) inflammasome activation, causing liver inflammation and fibrogenesis. Thus, higher expression of macrophage S1PR2 contributes to the activation of the NLRP3 inflammasome and the release of pro-inflammatory cytokines. In conclusion, S1PR1 agonists and S1PR2 antagonists might effectively manage COVID-19 and its severe effects. Further studies are recommended to elucidate the potential conflict in the effects of S1P in COVID-19.

Introduction

Coronavirus disease 2019 (COVID-19) is the third global respiratory viral pandemic, first reported in December 2019, following the Middle East respiratory syndrome coronavirus (MERS-CoV) and severe acute respiratory syndrome coronavirus (SARS-CoV) in 2012 and 2003, respectively [1,2]. COVID-19 is caused by severe acute respiratory syndrome coronavirus type 2 (SARS-CoV-2), a single-stranded RNA virus of the Betacoronavirus genus [3,4]. This virus exploits different receptors for entry into host cells; angiotensin-converting enzyme 2 (ACE2) is reported as the main entry receptor. SARS-CoV-2 infection is associated with the development of an excessive immune response and hyper-inflammation in severe cases, which may progress to cytokine storm and multi-organ injuries (MOI) like acute respiratory distress syndrome (ARDS) and acute lung injury (ALI) [5-7]. In addition to the pro-inflammatory cytokines, different mediators are involved in SARS-CoV-2 pathogenesis and infection, such as sphingosine-1-phosphate (S1P) [8,9]. S1P is a bioactive lipid found at a high level in plasma; it is synthesized from sphingomyelin by the action of sphingosine kinase (Sphk1/2), ceramidase, and sphingomyelinase [10]. S1P is metabolized to an inactive form by S1P lyase (SPL) (Fig. 1).

Characteristics of sphingosine-1-phosphate

The main sources of S1P are erythrocytes, endothelial cells, and, to a lesser extent, platelets [10]. S1P activates G-protein-coupled S1P receptors (S1PR1-5), with endothelial cells highly expressing S1PR1-3. S1P is involved in inflammation, immunity, angiogenesis, vascular permeability, and lymphocyte trafficking [11]. Pharmacological inhibition of Sphk1/2 halves the plasma S1P concentration.
Interestingly, Sphk1/2-null mice show a higher S1P level, since Sphk1/2 participates in the redistribution of S1P between erythrocytes and endothelial cells [10]. S1PR1 is the most common and widely distributed of all the receptors for S1P. Signaling through this receptor, mediated by dynamin-2 and clathrin, activates the phosphatidylinositol 3-kinase (PI3K) pathway, which is essential for maintaining vascular permeability and stabilization [12]. S1PR1 improves innate immunity by activating the migration of macrophages, neutrophils, mast cells, and eosinophils, and by inhibiting abnormal interferon-alpha (IFN-α) production during viral infections [13]. S1PR1 also regulates innate and adaptive immune responses by controlling natural killer (NK) cell trafficking and macrophage polarization [13]. These downstream signals enhance endothelial protection and anti-inflammatory effects. Emerging evidence from previous and recent studies demonstrates that S1P, via S1PR1, causes vasodilation and endothelial protection through a nitric oxide (NO)-dependent pathway and modulation of Ca²⁺ transport [14,15]. It has been shown experimentally that S1P-deficient mice subjected to anaphylaxis exhibit high vascular leakage and mortality. In this condition, the administration of erythrocytes, a main source of S1P, restored endothelial function and integrity in S1P-deficient mice [16]. Therefore, S1PR1 elicits barrier-protective and cardioprotective properties through anti-inflammatory activity.

On the other hand, S1PR2, in response to S1P, opposes the effect of S1PR1. It induces phagocytosis independent of complement activation by inhibiting S1PR1-mediated signaling pathways, while inducing vascular permeability and endothelial dysfunction [17]. Thus, the expression balance between S1PR1 and S1PR2 may affect the endothelial response to S1P. A better understanding of how S1P produces beneficial or harmful effects in disease and health should therefore be related to the receptor types. S1P binding to S1PR2 also antagonizes S1PR1 action via activation of the G12/13-Rho-Rho kinase (ROCK) pathway, which induces endothelial permeability [17]. Endothelial dysfunction develops during acute inflammation, with increased expression of adhesion molecules. S1PR2 is instrumental in vascular inflammation and in the inhibition of S1PR1, promoting the development of endothelial dysfunction and ischemia-reperfusion injury [18].

Role of S1P pathway in viral infections

The SphK1/2/S1P axis has a potential role in the generation and release of pro-inflammatory cytokines and in the maintenance of vascular integrity. S1P plays an integral role in regulating viral replication, adaptive/innate immune responses, and hyperinflammation [19]. Activation of Sphk1/2 accelerates infections such as respiratory syncytial virus (RSV) and cytomegalovirus (CMV) infections, while its inhibition truncates viral replication of measles virus (MV) and influenza A virus [20]. The SphK1/S1P axis facilitates viral entry and enhances viral replication. S1P acts as a co-receptor modulating viral entry and intracellular replication, and affects the antiviral immune response [19]. It has been demonstrated that cells over-expressing SphK1 are highly susceptible to different viral infections compared to normal cells [21].

Fig. 1: Pathway of sphingosine-1-phosphate (S1P). Sphingosine is converted by sphingosine kinase (Sphk1/2) to S1P, which leads to a cellular response through S1P receptors (S1PR1-5). S1P lyase (SPL) metabolizes S1P to inactive metabolites.
Recently, SphK1 has been shown to co-localize with viral RNA, so inhibition of Sphk1/2 may impair and limit viral replication [21]. These findings suggest that the SphK1/2/S1P axis has an important role in viral replication and that inhibitors of this intracellular axis may restrict viral load by inhibiting viral replication. Targeting the SphK1/2/S1P axis could therefore be an effective strategy against viral infections and the associated hyperinflammation and endothelial dysfunction.

Role of S1P pathway in COVID-19

The SphK1/2/S1P axis is also involved in promoting the replication of SARS-CoV-2 and the release of pro-inflammatory cytokines [22]. Pan et al. [25] suggest that the SphK1/2/S1P pathway promotes the invasion of SARS-CoV-2 into the central nervous system (CNS) through the olfactory pathway via expression of S1PR1. It has been observed that low levels of plasma S1P correlate with COVID-19 severity and can be regarded as a biomarker of disease severity [26]. In their prospective case-control study involving 111 COVID-19 patients and 47 healthy controls, Marfia et al. found that the reduction in circulating S1P level correlated with COVID-19 severity [27]. The underlying mechanisms may involve either injury to endothelial cells, erythrocytes, and platelets, which are major sources of circulating S1P, or reduction of S1P transporters like high-density lipoprotein (HDL) and albumin [27]. Moreover, S1P is increased within erythrocytes through upregulation of Sphk1/2 in COVID-19 patients as an adaptive response to maintain endothelial integrity and prevent tissue hypoxia [28,29]. However, hemolytic anemia and abnormal erythrocrine function in severe COVID-19 may affect circulating S1P [30,31]. S1P is rapidly synthesized by endothelial cells and hemopoietic cells to compensate for any reduction in plasma S1P [11]. However, in severe COVID-19, serum S1P is reduced, owing to the suppression of hemopoietic tissues by high circulating IL-6, and correlates with disease severity [27]. Despite these robust findings, these observations did not address the receptor-dependent effects of S1P or offer further insight into the resultant benefits or detriments.

Naz and Arish reported that S1P limits the excessive IFN-α response in SARS-CoV-2 infection by downregulating nuclear factor kappa B (NF-κB) and mitogen-activated protein kinase (MAPK) signaling [32]. Thus, activation of S1PR1 and inhibition of S1PR2 could be beneficial in COVID-19 sufferers, and S1P analogue(s) might be helpful in treating COVID-19. S1PR2 contributes to the TNF-α-induced pro-inflammatory response and NF-κB activation, promoting endothelial permeability and dysfunction [33-35]. Hou and colleagues revealed that S1P, through S1PR2, promotes NOD-like receptor pyrin 3 (NLRP3) inflammasome activation, causing liver inflammation and fibrogenesis [36]. Thus, higher expression of S1PR2 by macrophages contributes to the activation of the NLRP3 inflammasome and pro-inflammatory cytokine release. It has been shown that the NLRP3 inflammasome and pro-inflammatory cytokines are highly activated in COVID-19 and are linked with the development of cytokine storm and ALI/ARDS [37-39]. Different studies have revealed that the SphK-S1P-S1PR axis plays a role in accelerating inflammation and the growth of endometriotic cells by increasing the expression of IL-6 and other pro-inflammatory cytokines [40]. As well, S1P has been shown to regulate cyclooxygenase-2 (COX-2)/prostaglandin E2 (PGE2) expression and IL-6 secretion in various respiratory diseases [41].
The mechanisms underlying S1P-induced COX-2 expression and PGE2 production in human tracheal smooth muscle cells (HTSMCs) remain unclear [42]; however, S1P-induced COX-2 expression and PGE2/IL-6 generation appear to be mediated through S1PR2 [42]. S1P inhibits ALI via S1PR1, whereas S1PR2 causes ALI and pulmonary edema [43]. Zhu et al. found that ApoM produces a protective effect against ALI through S1P/S1PR1 [44]. These findings suggest receptor-dependent effects of S1P in inducing ALI in COVID-19. Moreover, S1PR2 is induced during hypoxia [45], a cardinal feature of patients with severe COVID-19 [46]. Michaud et al. observed that S1P acts as a novel non-hypoxic stimulus that induces hypoxia-inducible factor 1 (HIF-1) [47]. It has been proposed that high HIF-1 may protect against COVID-19 severity by decreasing ferritin and modulating ACE2 expression [48,49]. Hence, the protective role of S1P in COVID-19 is exerted through S1PR1 binding and its responsive downstream effects.

S1P, via activation of S1PR2, inhibits the egress of lymphocytes from lymphoid organs, which causes lymphopenia, a condition linked with COVID-19 severity [50,51]. Fingolimod, a modulator of S1PR, can sequester lymphocytes in the lymph nodes, preventing them from contributing to the development of autoimmune disorders such as multiple sclerosis [52]. This medication is an analogue of sphingosine that is phosphorylated by Sphk to yield phospho-fingolimod. This intermediate, upon binding to S1PR1, induces the internalization of S1PR1 and the sequestration of lymphocytes [53]. Various studies have also reported a rise in blood pressure in mice and humans following long-term treatment with fingolimod [54,55]. Furthermore, higher expression of S1PR2 in the lung may lead to pulmonary vasoconstriction and the development of pulmonary hypertension [56], which are hallmarks of severe COVID-19 [57]. In addition, expression of S1PR2 causes disruption of endothelial integrity and the development of endothelial dysfunction by reducing endothelial nitric oxide synthase (eNOS) and NO availability, triggering the release of pro-inflammatory cytokines [58]. Bonaventura et al. observed that endothelial dysfunction is a potential cause of immunothrombosis and ALI/ARDS progression [59,60]. It has been reported that both S1PR1 and S1PR2 promote platelet activation and thrombin formation [61]. On that account, S1P-induced endothelial dysfunction and coagulopathy could increase COVID-19 severity. However, glucocorticoid anti-inflammatory effects are partially mediated by the activation of Sphk1 and of the S1P/S1PR2 complex [62], making glucocorticoids effective in COVID-19 by inhibiting IL-6-induced S1P release [27].

During inflammation, tumor necrosis factor-alpha (TNF-α) activates Sphk1/2 in endothelial cells, increasing S1P synthesis and the expression of S1PR2 [33]. This effect triggers the development of endothelial dysfunction and immunothrombosis in COVID-19 through an S1PR2-dependent pathway [33]. Targeting Sphk1/2 with specific inhibitors may therefore inhibit S1PR2-mediated hyperinflammation and endothelial dysfunction [22]. However, inhibiting Sphk1/2 may also be counterproductive in SARS-CoV-2 infection, because the S1P analogue FTY720/fingolimod reduces hyperinflammation and limits the exaggeration of the immune response during SARS-CoV-2 infection [63].
Similarly, the sphingolipid derivative ceramide-1-phosphate exhibits immunoregulatory and antiviral effects: it enhances antigen presentation and autophagy, with activation of the T cell response, which may be beneficial in the case of SARS-CoV-2 infection [63]. Moreover, SARS-CoV-2-induced up-regulation of the renin-angiotensin system (RAS) leads to the elevation of circulating angiotensin II (Ang-II), promoting the release of pro-inflammatory cytokines with the subsequent development of endothelial dysfunction, vascular inflammation, and ALI/ARDS [60,64,65]. It is worth noting that S1P, via S1PR1, can cause cardiac remodeling and fibrosis by inducing the release of Ang-II and IL-6 [66]. Meissner and colleagues showed that S1P is central to the pathogenesis of Ang-II-induced hypertension [67]. These findings suggest that S1P could be a detrimental factor increasing cardiovascular instability in COVID-19. Furthermore, S1P is involved in SARS-CoV-2 pathogenesis and infection through induction of the transmembrane protease serine 2 (TMPRSS2)/ACE2 axis. In addition, activation of protective ACE2 is associated with the expression of S1P and S1PR [63,68] (Fig. 2).

Fig. 2: Role of S1P in SARS-CoV-2 infection. S1P through S1PR1 induces transmembrane protease serine 2 (TMPRSS2), which activates the expression of ACE2, inducing the synthesis of S1P and activating sphingosine kinase 1/2 (Sphk1/2).

Modulation of the S1P pathway

Modulation of S1P receptors through agonists and antagonists is a common intervention to achieve clinical utility. A common example is FTY720, an S1PR1 agonist that elicits an immunosuppressive effect through the inhibition of lymphocyte recirculation [69]. FTY720-P, the phosphorylated derivative of FTY720, binds to S1PR1 and acts as a functional agonist. FTY720-P is more potent than S1P in inducing the degradation and internalization of S1PR1. It also possesses anti-angiogenic and immunosuppressive properties, making it relevant for different inflammatory and autoimmune disorders [70]. S1PR1 agonists attenuate the expression and release of pro-inflammatory cytokines, including IL-6, during pathogenic influenza virus infection [71]. Similarly, ponesimod is a potent and orally active S1PR1 agonist, effective against lymphocyte-mediated inflammation and used to manage autoimmune diseases [72]. According to Burg et al., in experimental mice with immune complex-induced endothelial dysfunction, ApoM-Fc, an S1PR1 agonist, attenuates polymorphonuclear neutrophil-induced endothelial dysfunction, suggesting that S1PR1 agonists limit neutrophil escape from capillaries and enhance endothelial cell barriers, thereby preventing immune-mediated vascular injury [73]. Another S1PR1 agonist of interest is CYM-5442. When used in combination with the antiviral oseltamivir, it greatly protects against H1N1-induced ALI through the inhibition of activated MAPK and NF-κB signaling pathways [74]. In this regard, S1PR1 agonists may be useful in treating COVID-19 by dampening the exaggerated immune response and endothelial dysfunction that are hallmarks of SARS-CoV-2 infection [34]. Likewise, the S1PR1 agonist fingolimod could be a potential agent against SARS-CoV-2 infection-induced ALI/ARDS, by inhibiting pulmonary vascular endothelial dysfunction and inflammatory infiltrates [74].
On the other hand, blocking the inflammatory S1PR2 with selective antagonists may reduce complement activation, vascular permeability, endothelial dysfunction, the TNF-α-induced pro-inflammatory response, and NF-κB activation [18,33]. JTE-013 is the only S1PR2 antagonist with well-characterized pharmacology, although it suffers from low potency and selectivity [75]. Recently, other S1PR2 antagonists, such as CYM-5520 and CYM-5578, have been identified, but there is a dearth of information regarding their characterization and the understanding of their biological mechanisms. S1PR2 antagonists could be of value in reducing pulmonary hypertension and lung fibrosis. They may also attenuate endothelial dysfunction and restore vascular endothelial barriers [76]. As a consequence of their anti-inflammatory and endothelial cell-protective effects, S1PR2 antagonists might be of great value in managing COVID-19.

These observations highlight that S1P has a dual role in different viral infections, including SARS-CoV-2. Despite the various implications of the SphK1/2/S1P axis in the enhancement of viral infections, S1P exerts a protective role, through an S1PR1-dependent pathway, against the propagation of endothelial dysfunction and the release of pro-inflammatory cytokines. However, S1P, via an S1PR2-dependent pathway, provokes inflammatory reactions and the induction of endothelial permeability. Therefore, S1PR1 agonists and S1PR2 antagonists could be a novel therapeutic strategy against SARS-CoV-2. In this sense, this brief review, unlike other studies that focused on the level of S1P in COVID-19, provides a new perspective regarding the receptor-dependent effects of S1P. S1PR1 agonists and S1PR2 antagonists may offer a novel approach to COVID-19 management by modulating the exaggerated immuno-inflammatory response against SARS-CoV-2 infection, as well as the associated endothelial dysfunction and triggered inflammatory signaling pathways (Fig. 3).

Fig. 3: Role of S1P receptors in COVID-19. S1P, via the activation of S1P receptor 1 (S1PR1), activates phosphatidylinositol 3-kinase (PI3K), which maintains vascular permeability and inhibits the development of endothelial dysfunction (ED). The activation of S1PR1 stimulates interferon-alpha (IFN-α), which inhibits viral infection and decreases viral load. This activation inhibits the development of acute lung injury (ALI) and acute respiratory distress syndrome (ARDS). The activation of S1PR2 induces the release of pro-inflammatory cytokines (PIC) and the development of hyperinflammation. S1PR2 also triggers vascular permeability with the development of ED. Thus, the activation of S1PR2 increases the risk of development of ALI/ARDS.

Conclusion

This review demonstrates the potential role of S1P in COVID-19 with reference to its receptor-dependent effects. These observations give a controversial picture of the potential role of S1P in COVID-19, owing to poor evaluation of receptor-specific effects. In contrast to the adverse consequences of S1P-S1PR2 binding, which include endothelial dysfunction and the production of coagulopathy, S1P-S1PR1 binding has protective effects. Therefore, S1PR1 agonists and S1PR2 antagonists, regardless of the S1P level, might be a novel therapeutic approach for managing COVID-19 and its severe effects. Further studies are recommended to find agents with dual S1PR1 agonist/S1PR2 antagonist activity and to elucidate their effects on COVID-19. Elucidating the potential conflict in the effects of S1P in COVID-19 is highly recommended.
What Affects Attendance and Engagement in a Parenting Program in South Africa?

Parenting programs are a promising approach to improving family well-being. For families to benefit, programs need to be able to engage families actively in the interventions. Studies in high-income countries show varying results regarding whether more disadvantaged families are equally engaged in parenting interventions. In low- and middle-income countries (LMICs), almost nothing is known about the patterns of participation in parent training. This paper examines group session attendance and engagement data from 270 high-risk families enrolled in the intervention arm of a cluster-randomized controlled trial in South Africa. The trial evaluated a 14-week parenting intervention aiming to improve parenting and reduce maltreatment by caregivers. The intervention was delivered in 20 groups, one per study cluster, with 8 to 16 families each. Overall, caregivers attended 50% of group sessions and children, 64%. Using linear multilevel models with Kenward-Roger correction, we examined child and caregiver baseline characteristics as predictors of their attendance and engagement in the group sessions. Variables examined as predictors included measures of economic, educational, and social and health barriers and resources, as well as family problems and sociodemographic characteristics. Overall, the study yielded no evidence that the level of stressors, such as poverty, was related to attendance and engagement. Notably, children from overcrowded households attended on average 1.2 more sessions than their peers. Our findings suggest it is possible to engage highly disadvantaged families that face multiple challenges in parenting interventions in LMICs. However, some barriers, such as scheduling and alcohol and substance use, remain relevant.

Electronic supplementary material: The online version of this article (10.1007/s11121-018-0941-2) contains supplementary material, which is available to authorized users.

[...] prominent in the global agenda (WHO 2016). The new Sustainable Development Goals include ending all forms of violence against children. Responsive and consistent parenting has been identified as a protective factor that promotes children's health and development in low-income and high-stress contexts (Murphy et al. 2017). Therefore, there is a need to build knowledge on promoting effective parenting in LMICs. Parenting interventions are a promising approach to improving parenting, and to reducing and preventing child maltreatment (e.g., Mikton and Butchart 2009; Barlow et al. 2013). They can also target other outcomes, such as parent mental health, child externalizing behavior, and substance use (e.g., Chen and Chan 2016). These interventions include a range of designs, and are usually delivered individually or in groups over several weeks, based on a treatment manual. Group parenting interventions rely on participants attending sessions and engaging with the content. Several studies have demonstrated that the extent of participation is important in determining the benefits gained from a parenting intervention. The levels of participant attendance and engagement in sessions were linked to intervention outcomes in several studies in the United States (US) (Baydar et al. 2003; Gross et al. 2009). Other US studies have found that engagement in sessions, but not attendance, predicted outcomes (Garvey et al. 2006; Nix et al. 2009).
A recent Dutch study found that more sessions attended by parents predicted better parenting behavior, but not child behavior (Weeland et al. 2017). A key motivation for the current study, informed by frameworks such as family stress theory, is that families with multiple stressors are at higher risk for strained family relationships (Smith et al. 2016). Yet the families who experience multiple stressors and competing demands on their time may be the least likely to attend parenting sessions. For instance, families with limited social support may have access to fewer alternative caregivers who can look after other children in the household or care for ill family members during sessions (Farrelly and McLennan 2009). We consider the predictors relevant to child maltreatment and harsh parenting (e.g., see Meinck et al. 2017), the key outcomes of the current intervention. Drawing on categories in previous research, we look at four groups of predictors: economic and educational barriers and resources, social and health barriers and resources, parenting and child behavior, and sociodemographic factors.

Among the economic and educational factors, socioeconomic status is perhaps the most commonly examined predictor of participation in parenting interventions. A few studies have demonstrated lower attendance in families with lower socio-economic standing (e.g., Peters et al. 2005), while others did not find such an effect (e.g., Nix et al. 2009). The discrepancy is perhaps in part due to a mix of universal and high-risk samples, different interventions, study contexts, and methods of capturing the socio-economic situations of families. Studies have examined indicators such as family income (Haggerty et al. 2002), caregiver education (Nix et al. 2009), and family occupational prestige (Kazdin 2000). In LMICs such as South Africa, household overcrowding is another relevant indicator of family disadvantage (Meinck et al. 2017). Alternative measures may have different effects on participation. Many programs provide childcare, refreshments, and transport to ensure that families with an economic disadvantage can participate. However, the effects of limited educational experience may be harder to address (Eisner and Meidert 2011).

Social and health barriers studied in parenting interventions include parental depression, substance use, and social support (Morawska et al. 2014). Research has found significant relationships between caregiver depression, parenting stress, and dropout (Calam et al. 2002). However, results are mixed in terms of whether mental health affects participation, with some research identifying equal or higher engagement in families with more mental health problems (e.g., Baydar et al. 2003; Smith et al. 2016). We also examine additional measures of mental and physical well-being shown in South Africa to predict child maltreatment, such as caregiver exposure to intimate partner violence, childhood maltreatment, and HIV status (Meinck et al. 2017).

Examining the effect of pre-intervention parenting and child behavior, we might expect that families with greater difficulties find it more difficult to engage. On the other hand, according to the Health Belief Model, parents and children are more likely to participate if they perceive their family problems to be more serious. Indeed, some studies identified perceived family challenges as empirically significant predictors of higher parental participation (Baydar et al. 2016; Gorman-Smith et al. 2002).
However, other studies found no relation (Eisner and Meidert 2011; Salari and Filus 2017). Finally, findings are also mixed on the impact of sociodemographic characteristics, such as child age and gender.

Most interventions described in the literature targeted parents of young children and delivered training to caregivers only. As a result, predictors of child participation have rarely been examined in parenting research. However, increasingly, many programs for older children also include sessions for the young people themselves, and studies have shown that child involvement can boost parental engagement (Fleming et al. 2015) and lead to more sustainable changes in the family. Therefore, we also examine predictors of child attendance, using predictors analogous to those for the caregivers. Lastly, there is little information on what may affect participation in LMICs. The findings on predictors of participation thus far primarily originate from interventions focusing on disruptive child behavior in HICs. One study examined a multicomponent program for caregivers of malnourished children in the Dominican Republic (Farrelly and McLennan 2009). The program focused on nutrition, but also included sessions on child behavior management. The researchers found that none of the eight hypothesized variables predicted attendance.

In summary, research suggests a range of potential predictors affecting participation in parenting programs. In this exploratory study, we assess the effects of relative disadvantage on program participation in an LMIC setting. Given the inconsistent findings in previous studies and the paucity of research with adolescents, we did not hypothesize specific predictor effects. This paper aims to contribute to the emerging evidence base by describing attendance and engagement in a parenting intervention among a high-risk sample in South Africa, and by examining the factors associated with the variation in attendance and engagement.

Study Setting

This study was nested within a cluster-randomized trial in the Eastern Cape Province, South Africa, that took place between April 2015 and August 2016. The trial enrolled 552 families in 32 rural and 8 peri-urban clusters, all in communities with high rates of poverty and unemployment. Half of the clusters (16 rural and 4 peri-urban) were randomized to each trial arm. A detailed description is available in the protocol (Cluver et al. 2016b). In short, the treatment arm received the Sinovuyo Teen parenting program, which aims to reduce physical and emotional maltreatment of children and improve parenting. The control arm received a 1-day hygiene information intervention.

Study Sample

In this study, each participating family enrolled one child and their primary caregiver, defined as the person mainly responsible for the child and residing in the same household at least four nights a week. The caregivers enrolled in the study were primarily mothers and grandmothers; 3% of the caregivers were male. In the intervention arm, the children were 56% boys, aged 10 to 18 (M = 13.7, SD = 2.3). Although the recruitment focused on disadvantaged families, there was substantial variation in the predictors (descriptive information available online). The participants were recruited into the study through door-to-door recruitment and referrals from local community members, schools, and social workers. Recruitment focused on families that already experienced conflicts and stress.
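The clustered design just described (families nested in study clusters, with one intervention group per cluster) is what motivates the multilevel models mentioned in the abstract. The sketch below illustrates one way to set up such a model in Python; the file and variable names are hypothetical, and statsmodels does not implement the Kenward-Roger small-sample correction the authors used (in R this is available via lme4 with pbkrtest), so the sketch shows the model structure only.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per family in the intervention arm,
# with a 'cluster' column indexing the 20 intervention clusters.
df = pd.read_csv("sinovuyo_baseline.csv")

# Linear mixed model: fixed effects for example baseline predictors,
# random intercept per cluster to absorb between-group differences.
model = smf.mixedlm(
    "sessions_attended ~ poverty + overcrowded + caregiver_education",
    data=df,
    groups=df["cluster"],
)
result = model.fit(reml=True)  # REML estimation, conventional for such models
print(result.summary())
```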
To be enrolled in the study, families had to reply affirmatively to one of the screening questions on whether there were conflicts between the caregiver and adolescent in the household, and complete the two rounds of baseline assessments. All responses were kept confidential, except in cases of participants requesting assistance or being at risk of significant harm, such as children with recent suicide attempts. Families did not receive monetary incentives for participation, but were given small packs with snacks, stationery, and toiletries to thank them for participating.

Intervention Characteristics

Sinovuyo Teen is a 14-week manualized program based on social learning theory. The program was developed based on principles from existing research, such as modeling positive behavior and collaborative problem solving. During the development and piloting stages, the intervention was modified for the South African context (Cluver et al. 2016a, c). The intervention consisted of weekly group sessions (four separate and ten joint sessions for caregivers and children) and weekly home practice. Group sessions took place in a community location, such as a community hall or a school. Intervention groups included between 8 and 16 families, with 14 on average. The sessions lasted, on average, 1.8 h and were usually led by two facilitators. In addition, participants who were unable to attend a session received a brief home visit from the facilitators with a summary of the week's content. Sessions were facilitated by community members and social workers, trained by Clowns Without Borders South Africa, a local nongovernmental organization. Facilitators received an initial 5-day training and ongoing weekly supervision and further training.

Instruments and Measures

Attendance and Engagement Outcomes

The number of group sessions attended by children and caregivers in the intervention arm and their average engagement in the sessions were the primary outcomes of interest in this exploratory analysis. Both measures were collected through observations by the research team. Additionally, attendance was cross-checked with facilitator-recorded data. The research team observed 277 sessions out of the total 279, and 32% of the sessions were double-coded by two observers. Fifteen local Research Assistants were involved in the observations after training in sensitive data collection and in observational research. To measure the level of engagement in sessions, we used a behaviorally anchored 3-point scale (child or caregiver: 1, is quiet or distracted most of the time; 2, participated in parts of the session; 3, participated through most of the session).

Baseline Predictors

Baseline interviews were conducted by local Research Assistants and took place at participant homes or other venues, such as schools. All questionnaires were locally piloted in Xhosa. Tablets were used to administer questionnaires to participants. Below, we provide a summary of the baseline measures used as prospective predictors of attendance and engagement. We use similar, but not identical, baseline variables to predict child and caregiver outcomes. For instance, we use the child report of parenting to predict children's participation, and the caregiver report to predict caregiver participation. Similar to previous studies (e.g., De Los Reyes and Kazdin 2005), we find low correlations between the child and caregiver reports of the same constructs (between 0 and .24).
Where possible, we use predictor and outcome information from the same informant, as their perception is more likely to affect their own behavior.

Economic and Educational Barriers and Resources

Family poverty was measured by the Basic Necessities Scale, asking how many household necessities for children families could afford (Pillay et al. 2006). Cronbach's α for this scale was 0.72 (8 items). Overcrowding was defined as having more than three people residing per room, per the United Nations Human Settlements Programme definition. Caregiver education was a dichotomous indicator of whether the caregiver completed primary school.

Social and Health Resources and Barriers

As a measure of caregiver depression, we included the Centre for Epidemiological Studies Depression Scale, used previously with South African populations (Pretorius 1991). Cronbach's α for this scale was 0.86 (19 items). Child depression was measured by the Child Depression Inventory short form (Kovacs 1992), with Cronbach's α 0.64 (10 items). Child and caregiver HIV was assessed with the Verbal Autopsy Symptom Checklist (Lopman et al. 2006; Hosegood et al. 2004) and 6 specific items on HIV testing, ARV treatment, and CD4 count. HIV status was determined with a conservative threshold of ≥ 3 AIDS-defining illnesses, or self-identification of HIV-positive status, or caregiver report of child status, as children may be unaware of their status. Alcohol and drug use among children was measured using two adapted items from the Alcohol and Other Drug Use Module developed by the World Health Organization for the Global School-Based Health Survey. Alcohol and drug use among caregivers was measured by four items developed by the research team to assess alcohol and drug use in the past month. Caregiver social support was measured with the Medical Outcome Study Social Support Survey (Sherbourne and Stewart 1991), Cronbach's α 0.95 (19 items). Caregiver experience of intimate partner violence was measured using a simplified version of the Revised Conflict Tactics Scale Short Form, Cronbach's α 0.85 (6 items). Caregiver history of maltreatment was assessed using an adapted version of the ISPCAN Child Abuse Screening Tool-Retrospective, measuring the occurrence of abusive physical, sexual, and emotional events before the age of 18, Cronbach's α 0.71 (7 items).

Perceived Parenting and Child Behavior

Child maltreatment (physical abuse, emotional abuse, and neglect) in the past month was assessed using a culturally adapted version of the ISPCAN Child Abuse Screening Tool (Meinck et al. 2018; Zolotor et al. 2009). For the child report, Cronbach's α was 0.90, and for the caregiver report, 0.79. Parenting approaches were measured by the Alabama Parenting Questionnaire (parent and child versions) (Frick 1991), an instrument widely used internationally, as well as in South Africa. As suggested by previous research with children of this age, and supported by exploratory factor analyses in this sample, we combined the positive and involved parenting subscales. For the child report, Cronbach's α were 0.92 for positive and involved parenting (16 items), 0.76 for poor monitoring (10 items), and 0.68 for inconsistent discipline (6 items). For the caregiver report, Cronbach's α were 0.85 for positive and involved parenting (16 items), 0.72 for poor monitoring (10 items), and 0.55 for inconsistent discipline (6 items). Child externalizing behavior was measured using the rule-breaking and aggressive behavior scales of the Child Behavior Checklist 4-18 (Achenbach 1991).
Cronbach's α were 0.85 (child report) and 0.90 (caregiver report), 35 items each.

Sociodemographic Factors

Other participant characteristics used in the analyses were age, gender, caregiver employment, and the child's orphan status.

Data Analysis

For the sessions attended by two observers, we examined the inter-rater reliability of the participant engagement measure. The intra-class correlation coefficients were 0.79 (95% CI 0.75; 0.81) for child engagement and 0.85 (95% CI 0.83; 0.87) for caregiver engagement. Given the high reliability, we used the averages of the two observations for analyses. To evaluate whether there were any systematic differences between caregiver and child attendance and engagement, we conducted t tests, adjusted for clustering. To examine changes in individual engagement over time, we used session number as a predictor of engagement in a multilevel model. To examine the level of overall attendance, we calculated the percentages of participants attending each week out of the total number enrolled.

In the analyses of predictors, we used multilevel models to ensure that the non-independence of data within clusters was appropriately taken into account (Hox et al. 2010). The analyses presented here include bivariate and multiple random-intercept models. We used restricted maximum likelihood estimation with the Kenward-Roger correction, as recommended for samples with 10-20 clusters (McNeish 2017). As demonstrated elsewhere (Enders and Tofighi 2007), when the question primarily bears on variable relationships at the lowest level of the model (participants), group-mean centering of predictors provides an appropriate estimate of the relationship. Therefore, we used predictors centered around the cluster means. Thus, the regression coefficients represent pooled within-cluster relationships. The only group-level predictor (peri-urban or rural location) was grand-mean centered. To facilitate the interpretation of continuous predictors other than age, they were standardized using pooled within-cluster standard deviations (available online). Participant engagement was standardized using the overall mean and standard deviation of the outcome. Because of the focus on participant-level variables, we used fixed slopes of predictors across clusters. A linear link function was used for all regression analyses presented here. As a sensitivity check, given the count nature of the attendance outcomes, we also analyzed them using a negative-binomial link function with a robust sandwich estimator of variance and found the same pattern of results with one minor difference, discussed subsequently. All analyses were implemented in Stata/SE 14.2, except the inter-rater reliability calculations in R 3.3.0.
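As a concrete illustration of this modeling setup, the following is a minimal sketch in Python's statsmodels (the analyses themselves were run in Stata and R, and statsmodels does not implement the Kenward-Roger correction); the data file and variable names here are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per caregiver, with a cluster id.
df = pd.read_csv("sinovuyo_baseline.csv")  # assumed file name

# Group-mean (cluster-mean) centering of a level-1 predictor, so that
# coefficients represent pooled within-cluster relationships.
df["depression_c"] = (
    df["depression"] - df.groupby("cluster")["depression"].transform("mean")
)

# Random-intercept model of sessions attended, estimated by REML.
model = smf.mixedlm(
    "sessions_attended ~ depression_c + age", df, groups=df["cluster"]
)
result = model.fit(reml=True)
print(result.summary())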
Describing Participation in Sinovuyo Teen

Caregivers attended 7.1 group sessions on average (50% of all sessions) and children, 9 (64%). Thus, children attended more sessions than caregivers, although the difference did not reach statistical significance: t(540) = −1.59, p = 0.12. Alternative caregivers could attend sessions, but this was not common, with 43 cases recorded across 270 families in 279 sessions. Twenty-five (9%) caregivers and 14 (5%) children did not attend any group sessions. As part of the program design, families were approached for home visits throughout the 14 weeks if they did not attend a session, unless they chose to drop out of the study. None of the families dropped out of the study during the intervention. Only four families received no visits or sessions. Given the small number, they were included in the main analyses.

Children's mean engagement in sessions was 2.03 (SD = 0.49), 68% of the maximum on the measure, and caregivers', 2.40 (SD = 0.46), 80% of the maximum. This difference between children and caregivers was significant: t(498) = 3.01, p < 0.01. Therefore, as a group, caregivers were rated as more actively engaged in the sessions they attended than children. For 25 (9.3%) caregivers and 17 (6.3%) children, engagement data were missing either because they did not attend any group sessions or because they were not rated in error. Consequently, the analyses of predictors for engagement have a smaller number of participants than the analyses for attendance. Attendance and engagement varied significantly across the clusters, with unconditional intra-class correlations of .24 for caregiver attendance, .06 for caregiver engagement, .12 for child attendance, and .15 for child engagement. The cluster's rural or peri-urban status explained .30 of the between-cluster variance for caregiver attendance, .31 for child attendance, .14 for caregiver engagement, and .22 for child engagement.

Relation Between the Outcome Variables

Within families, there was a large correlation (.60) between the attendance of caregiver and child, and a medium correlation (.27) between the average engagement of caregiver and child. Given the size of the correlations, it was informative to analyze each of the outcomes separately to test their unique predictors. Figure 1 plots overall attendance across the 14 intervention weeks. While attendance did not consistently increase or decrease over time, there was an ongoing fluctuation. We observed that the dips in attendance approximately corresponded to the beginning of a new month, when social grants were disbursed and families traveled to receive them. Examining individual growth plots and longitudinal multilevel models, we found no linear trend for child engagement and a slight increase in caregiver engagement over time (B = 0.02, p < 0.001).

Fig. 1. Percentage of participants attending group sessions each week (August-November 2015).

Predictors of Attendance

Several predictors were significantly related to caregiver attendance in the multiple regression analysis (see Table 1). Peri-urban or rural residence, alcohol and substance use, positive and involved parenting, caregiver age, gender, and caregiver employment showed unique relationships to attendance in a model that included all predictors simultaneously. Specifically, caregivers in peri-urban areas attended on average 3.08 fewer sessions than caregivers in villages. Caregivers with one standard deviation higher alcohol and substance use attended 0.50 fewer sessions (p = 0.048). Attendance was 0.67 sessions higher among caregivers who reported one standard deviation higher levels of positive and involved parenting. Older caregivers had slightly higher attendance, with one additional year of age predicting 0.05 more sessions attended. Male caregivers (n = 8) attended 3.37 fewer sessions. Caregivers who had a job at baseline attended an average of 3.08 fewer sessions than caregivers who did not report being employed at baseline, controlling for other predictors.

Several unique predictors were significantly related to child attendance: peri-urban or rural residence, overcrowded household, alcohol and substance use, inconsistent parenting, and child age (see Table 2).
Similar to the caregivers, young people in peri-urban areas attended on average 2.29 fewer sessions than those in the villages. On the other hand, children in overcrowded households attended 1.21 more sessions. Children with one standard deviation higher reported alcohol and substance use attended 0.58 fewer sessions. Older children attended fewer sessions, with one additional year of age predicting 0.39 fewer sessions attended. Finally, children who reported higher inconsistent parenting also attended more sessions; this finding, however, was impacted by interactions with other variables, as there was no bivariate relation between the two variables. In the sensitivity analyses (available online), one result differed in terms of statistical significance. Using negative-binomial models, child alcohol and substance use was not a statistically significant predictor (p = 0.127) of attendance.

Predictors of Engagement

None of the predictors were significantly related to caregiver engagement in the multiple regression (see Table 1). Examining the predictors of child engagement, two predictors showed unique relations to engagement: inconsistent parenting and child age (see Table 2). Children reporting one standard deviation more inconsistent parenting had 0.17 standard deviations higher engagement. Older children also had slightly higher engagement, with an additional year of age predicting 0.07 standard deviations higher engagement. VIF values ranged between 1.04 and 2.10, suggesting multicollinearity was not a concern.

Discussion

Delivering evidence-informed services can only be beneficial if families participate in them. In this study, we explored factors that influenced attendance and engagement for caregivers and children enrolled in a parenting program in a disadvantaged area of South Africa. The study did not yield evidence that family disadvantage was related to levels of attendance and engagement. This may be due to the program design including efforts to reduce known barriers to engagement, for instance, by providing transport. Overall, the session attendance rates in this study are somewhat lower than the average rates of around 72% reported in parenting program studies in HICs (Chacko et al. 2016). Other studies in LMICs report even higher attendance rates, such as 81.2% for caregivers and children 7-15 years old in a 12-session program among a Burmese migrant and displaced population in Thailand. One reason for this difference may be the provision of home visits in Sinovuyo Teen for all participants who missed a session, reducing the incentive to attend group sessions. There may be a trade-off between reaching participants with home visits and their additional costs.

As in many previous studies, individual socioeconomic status did not predict participation. However, participants in rural clusters attended more sessions than those in peri-urban areas, possibly due to fewer alternative services or leisure opportunities in the villages. South Africa's rural areas continue to have lower levels of public services, income, and government grants than peri-urban or urban locations (Coovadia et al. 2009). Moreover, we found higher attendance rates among children from overcrowded households. One interpretation is a higher perceived need for support by youth in overcrowded homes. The sessions may have also provided a break from a crowded home. Overall, these are encouraging findings that suggest parenting programs can successfully reach vulnerable families.
However, we found that both caregivers and children with higher rates of alcohol and substance use had lower attendance, although the difference for children did not reach statistical significance in the sensitivity analyses. Thus, some social and health barriers, such as alcohol and substance use, can still hinder participation and need to be investigated further. Similar to much of the previous research, mental health and social resources, as well as parenting and child behavior, were generally not related to participation. However, several parenting dimensions did appear as significant predictors: higher positive and involved parenting predicted higher caregiver attendance, and more inconsistent parenting predicted higher child attendance and engagement.

In line with other parenting studies, this sample included mostly female caregivers, and the few male caregivers attended much less frequently. This likely has to do with the social norms of women bearing the responsibility for childcare. Engaging men in parenting interventions requires a conscious effort in program design and delivery, such as drawing on specific motivations for fathers (Siu et al. 2017). In addition, children attended more sessions than caregivers but had lower average engagement in sessions, perhaps due to cultural norms mandating that children show respect to elders. Children's lower engagement may also be related to the pedagogical style common in schools, with children as passive listeners.

Consistent with previous research, time and logistics emerged as a major barrier to attendance. Caregivers who were employed had lower attendance, likely because the sessions took place on workday afternoons. Due to safety issues, it was not feasible to conduct sessions in the evening, which could have helped working caregivers. In the follow-up questionnaire, both children and caregivers cited other commitments as the most common reason for not attending: community events, such as church group meetings and funerals; family obligations, such as housework; and school commitments for children. In addition, sickness was an oft-cited reason for not attending. Monthly drops in attendance seemed to coincide with the time when participants traveled to obtain their monthly government grants and then shop for food and other necessities. While this has not been highlighted in previous literature, flexible scheduling to accommodate community events may be beneficial in rural settings.

The study has several limitations. First, the limited statistical power requires replication of the results in other studies. Future intervention studies may benefit from incorporating pre-planned analyses of program enrolment, attendance, and engagement into their trial protocols, with power calculations. Second, the findings may not be generalizable to other settings. For example, three out of the four peri-urban clusters included in the intervention were part of one township area. However, this area was very populous, with over 18,000 residents. Third, we were only able to examine variation among study participants, and we do not know whether the most disadvantaged families enrolled in the trial at similar rates. Future research in LMICs also needs to examine programmatic factors, such as recruitment strategies and the relationship with the facilitators, and their interaction with attendance.
Examining participant perceptions of barriers to treatment and caregiver causal attributions of children's behaviors in LMICs can inform interventions to boost engagement, such as motivational interviewing. This study contributes to the scarce literature on evaluating the delivery of family interventions in LMIC settings by examining a range of relevant predictors of attendance and engagement. This is also one of the few studies to examine child or adolescent attendance. Based on this study and other recent research, it appears that parenting support programs can reach and engage very vulnerable families. Moreover, the most vulnerable families in LMICs may be especially receptive to these programs. These findings have important implications for programming and policy. The next vital question is whether these levels of participation can be maintained when the programs are disseminated more widely in service settings.
Electronic correlation determining correlated plasmons in Sb-doped Bi$_2$Se$_3$

Electronic correlation is believed to play an important role in exotic phenomena such as insulator-metal transitions, colossal magnetoresistance and high-temperature superconductivity in correlated electron systems. Recently, it has been shown that electronic correlation may also be responsible for the formation of unconventional plasmons. Here, using a combination of angle-dependent spectroscopic ellipsometry, angle-resolved photoemission spectroscopy and Hall measurements, all as a function of temperature and supported by first-principles calculations, the existence of low-loss high-energy correlated plasmons accompanied by spectral weight transfer, a fingerprint of electronic correlation, in the topological insulator (Bi$_{0.8}$Sb$_{0.2}$)$_2$Se$_3$ is revealed. Upon cooling, the density of free charge carriers in the surface states decreases whereas that in the bulk states increases, and the newly discovered correlated plasmons are key to explaining this phenomenon. Our result shows the importance of electronic correlation in determining new correlated plasmons and opens a new path toward engineering plasmonic-based topologically insulating devices.

Recently, a new type of plasmon, the correlated plasmon, has been theoretically proposed to occur due to electronic correlation in strongly correlated materials [33]. Indeed, these correlated plasmons have been observed in a series of Mott-like insulating oxides [34,35]. Correlated plasmons differ from conventional plasmons in that they arise from the collective oscillation of correlated electrons as opposed to the collective oscillation of free charges. Due to the electron-electron interaction, correlated plasmons may have a positive, low real part of the dielectric function and can readily couple with free-space photons [33,34,36]. Because of this, they also have significantly lower loss than conventional plasmons, which could have benefits in devices utilizing topologically insulating materials. This motivates us to search for and explore electronic correlation and new plasmons in TIs.

In this work, we report unusual effects of temperature change on the electronic and optical properties of the topological insulator (Bi0.8Sb0.2)2Se3. Using a combination of angle-dependent spectroscopic ellipsometry, angle-resolved photoemission spectroscopy and Hall measurements, all as a function of temperature and supported by first-principles calculations, we discover novel low-temperature high-energy correlated plasmons accompanied by anomalous spectral weight transfer in topological insulators. Upon cooling, a spectral weight transfer of free charge carriers occurs between the surface and the bulk states, which is due to electronic correlation and the scattering of surface-state free charge carriers by the correlated plasmons at lower temperatures.

Experimental methods and calculations:

Sample preparation and ARPES. A high-quality single-crystalline sample of (Bi0.8Sb0.2)2Se3 is grown by the Bridgman method [37]. The single crystal is cleaved in UHV by the standard Kapton-tape method to obtain a good-quality surface for the ARPES measurement. The ARPES experiment is performed at the CNR-IOM APE beamline [38] at Elettra Sincrotrone Trieste. The Fermi surface maps are performed using a Scienta DA30 electron analyzer without rotating the sample. The data are collected at liquid nitrogen temperature (T = 78 K), and at a chamber base pressure better than 1 × 10^−10 mbar.
The temperature-dependent ARPES and XPS measurements are also performed at the Soft X-ray and Ultraviolet (SUV) beamline of the Singapore Synchrotron Light Source [39]. The ARPES measurements are carried out using a laboratory helium lamp source (photon energy = 21.21 eV), while the XPS measurements are carried out using a laboratory Al (photon energy = 1486.6 eV) X-ray source.

Spectroscopic Ellipsometry. Spectroscopic ellipsometry measurements are carried out using a variable-angle spectroscopic ellipsometer (V-VASE, J.A. Woollam Co.) with a rotating analyzer and compensator at the Singapore Synchrotron Light Source (SSLS). Spectroscopic ellipsometry is a non-destructive photon-in photon-out technique; it therefore does not have charging issues and can be used to probe simultaneously the loss function and the complex dielectric function in correlated electron systems. Also, spectroscopic ellipsometry is not as surface sensitive as photoelectron spectroscopy techniques, as the light penetration depth is tens or even hundreds of nm, as shown by the attenuation length in Supplementary Fig. S2 [40], which is calculated from the complex dielectric function in Fig. 2. The measurements are taken in the energy range of 0.6-6.0 eV with the sample inside a UHV cryostat with a base pressure of 10^-8 mbar. The measurements are taken at angles of 68°, 70° and 72°, which are limited in range by the UHV windows. Measurements are taken at a range of temperatures from 77 K to 475 K at an angle of 70°. As the sample is a thick (>10 μm) single crystal, the complex dielectric function can be determined through the best fits to the output data Ψ and Δ, which are shown in Supplementary Fig. S1(a) and S1(b), respectively [40]. As single-crystal (Bi0.8Sb0.2)2Se3 is anisotropic, the spectroscopic ellipsometer is used in the Mueller-matrix mode with a rotational sample stage. Further details can be found in Refs. [41] and [42]. Using the WVASE analysis program, a model of the sample is created, which includes surface effects such as roughness and contamination. As spectroscopic ellipsometry is a self-normalising technique, the complex dielectric function of the sample can be determined through the best fits to the output data Ψ and Δ [41-44].

Hall Measurements. The Hall effect measurement is performed in a physical properties measurement system (PPMS), where a bar-shaped device was prepared, and silver paste is used for the electrodes.

First-Principles Calculations. The first-principles calculations are performed using the density-functional-theory-based Vienna ab initio simulation package (VASP) [45,46] with the Perdew-Burke-Ernzerhof (PBE) functional and projector augmented wave (PAW) potentials [47]. In all calculations, the cutoff energy for the electronic plane-wave expansion is set to 500 eV, and the spin-orbit coupling effect is included. The criterion for electronic energy convergence is set to 1.0×10^−6 eV. The lattice constants of the pristine Bi2Se3 bulk are fixed to the experimental values (a = b = 4.14 Å and c = 28.64 Å) [48,49], while the atomic positions are optimized with the van der Waals correction (DFT-D3) until the force is smaller than 0.01 eV/Å [50]. The topological surface states are calculated using six quintuple layers (QL) of Bi2Se3 and 20 Å vacuum layers normal to the surface. The two Bi atoms in the top and bottom QLs are substituted by Sb atoms to simulate the Sb doping effect, as shown in Figure 5.
Γ-centered 6 × 6 × 6 and 9 × 9 × 1 k-point meshes are used to sample the first Brillouin zone (BZ) of the Bi2Se3 primitive cell and the surface slabs, respectively.

Results:

The high-resolution ARPES results of (Bi0.8Sb0.2)2Se3 are presented in Fig. 1. The bulk free carrier density of our system can now be estimated using these ARPES results. We may use the approximation of parabolic bands to find the free carrier density in the bulk of the system [56,57]:

n = (2m*E_b)^{3/2} / (3π²ħ³),

where E_b is the binding energy of the bottom of the conduction band, n is the bulk free carrier density and m* is the effective free-charge-carrier mass. Considering an effective mass (m*) value of ≈ 0.15 m_e, typical for a Bi2Se3 system [58], a bulk free carrier density of 1.69×10^19 cm^-3 is obtained at 77 K.

Figures 2(a) and 2(b) show the real and imaginary parts of the dielectric function, respectively, modelled from spectroscopic ellipsometry data taken over a range of temperatures from 475 K down to 77 K. Interestingly, we find anomalous spectral weight transfer in a broad energy range, from high energy to low energy upon cooling (as further discussed later). Spectral weight transfer is a fingerprint of electronic correlation [59-62]. There is a clear change in both parts of the complex dielectric function as the (Bi0.8Sb0.2)2Se3 is cooled, especially from 300 K to 250 K. There is also an edge in the ε2 spectra at 0.95 eV for all temperatures, which gets sharper as the sample is cooled.

Spectroscopic ellipsometry is a powerful tool to search for plasmonic activity in correlated electron systems [34]. For photons below X-ray energies, the crystal momentum is much higher than the momentum transfer (q); therefore q is finite but approaches zero. In this limit, the distinction between the longitudinal and transverse dielectric functions vanishes, which allows spectroscopic ellipsometry to probe both the optical and plasmonic properties of materials in the low-q limit through the loss function, which is calculated from the complex dielectric function [34,63,64].

Our main finding is a correlated plasmon at ~1 eV. Figure 2(c) shows a comparison of the loss function and ε2 for all temperatures measured between 475 K and 77 K in the spectral region 0.6 eV - 1.4 eV. Interestingly, the loss function peak at ~1 eV emerges upon cooling when the spectral weight of ε2 at higher energy is reduced [as shown in Fig. 2(b) and discussed later in Fig. 3(c)]. Such a spectral weight transfer is a fingerprint of electronic correlation, which is mainly responsible for the correlated plasmons [33,34]. We note that this loss function peak is blue-shifted from the optical excitation in ε2 at 0.95 eV, while ε1 is still positive due to the electron-electron interaction. The significant change in the spectral weight of ε2 as the (Bi0.8Sb0.2)2Se3 is cooled is further highlighted by the temperature difference of the complex dielectric function shown in Supplementary Fig. S1(c) and S1(d) [40]. This is the first time that correlated plasmons have been detected in topological insulators.

In order to find the origin of the correlated plasmons, we quantitatively explore the spectral weight transfer seen in the topological insulator's electronic response to external electromagnetic fields with temperature. In particular, we look at the optical conductivity, which is related to ε2 by σ1(ω) = ε0 ω ε2(ω), where σ1 is the real part of the complex optical conductivity and ω is the photon frequency.
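Both the loss function and the optical conductivity follow directly from the ellipsometry-derived dielectric function. Below is a minimal Python sketch of this conversion, assuming SI units and using a synthetic placeholder spectrum rather than the measured data:

import numpy as np

eps0 = 8.854e-12        # vacuum permittivity, F/m
hbar_eV = 6.582e-16     # eV*s

# Placeholder complex dielectric function on a photon-energy grid (eV);
# in practice eps1 and eps2 come from the ellipsometry fits of Fig. 2.
E = np.linspace(0.6, 6.0, 600)
eps1 = 1.5 + 0.2 * np.sin(E)                   # synthetic, for illustration only
eps2 = np.exp(-((E - 2.0) / 0.8) ** 2) + 0.1   # synthetic, for illustration only
eps = eps1 + 1j * eps2

# Loss function Im(-1/eps): its peaks mark (correlated) plasmon excitations.
loss = (-1.0 / eps).imag

# Real optical conductivity sigma1(omega) = eps0 * omega * eps2(omega), in S/m.
omega = E / hbar_eV
sigma1 = eps0 * omega * eps2

print(f"max loss function at E = {E[np.argmax(loss)]:.2f} eV")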
The f-sum rule, ∫₀^∞ σ1(ω) dω = π ne e² / (2 me), links the integral of the optical conductivity across the whole spectrum to the free-charge-carrier density, ne [34,63]. Therefore, the integration over a part of the spectrum between E1 and E2,

W = ∫_{E1}^{E2} σ1(E) dE,

gives the spectral weight within that spectral region. By analyzing the change of W over the spectral range 0.6 eV - 6.3 eV for each of the temperatures measured, as shown in Fig. 3(b), we can gain an insight into the behavior of the free charge carriers as the sample is cooled [34,63]. There is a slight increase in the spectral weight as the sample is cooled from 475 K to 300 K, before a sharp decrease down to 250 K. There is then a gradual increase back up to 2.7×10^4 (Ω cm)^-1 at 77 K. This indicates a drastic loss of electrons with energies between 0.6 eV and 6.3 eV between 300 K and 250 K. The energies required to shift the electrons outside of this spectral range are of the order of eVs, and this rules out thermal energy transfer, as the energies associated with temperatures below 500 K are too small (<43 meV). Therefore, the extra energy gained or lost must come from potential energy transfer, i.e. electron-electron correlations. The drop in the conductivity, and thus the electron density, that occurs at temperatures of 250 K and below also coincides with the appearance of the correlated plasmons seen in Fig. 2(c) at this temperature. Why this occurs below 250 K is the subject of a future study; however, it may be worth noting that a previous theoretical study on the topological insulator BiTlSe2 has shown that electron-phonon interactions become significant below 250 K and the system might enter the topologically insulating phase [65].

The optical conductivities for each temperature shown in Fig. 3(a) are divided into three spectral regions, which are then integrated across each region to give the results shown in Fig. 3(c). The positive region of the y-axis indicates an increase in the integrated spectral weight of the region, whilst the negative region indicates a decrease. The low-energy region shows only a minor increase in spectral weight as the sample is cooled, whereas the mid-energy region, between 1.60 eV and 2.75 eV, shows a larger increase. The high-energy region initially shows an increase in spectral weight followed by a massive decrease between 300 K and 250 K. A smaller decrease is also present in the mid-energy region between these temperatures, but a slight increase is seen in the low-energy region. As the largest loss of electrons occurs within the high-energy region, it appears that electrons are gaining energies of the order of eVs to move outside of the upper measured limit of the spectral range, rather than losing energy. As previously stated, this must be due to an increase in potential energy from long-range electron-electron correlations and the formation of correlated plasmons, which can only happen if electronic screening is enhanced at these lower temperatures. Although the integrated conductivity (i.e. the electron density) increases somewhat as the temperature is lowered from 100 K to 77 K, there is still an overall loss in that spectral region. There is also a subsequent increase of the integrated conductivity and electron density within the middle region due to the main spectral features, which are not directly related to the correlated plasmons, but which contribute to the overall increase seen in Fig. 3(b).
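A short sketch of the partial spectral-weight analysis behind Fig. 3(c), again on a placeholder spectrum (the region boundaries follow the text; the σ1 array would come from the measured ε2):

import numpy as np

E = np.linspace(0.6, 6.3, 600)                 # photon-energy grid, eV
sigma1 = np.exp(-((E - 2.0) / 0.8) ** 2)       # placeholder sigma1(E)

def spectral_weight(E1, E2):
    """Partial spectral weight W = integral of sigma1(E) from E1 to E2."""
    m = (E >= E1) & (E <= E2)
    return np.trapz(sigma1[m], E[m])

# Low-, mid- and high-energy regions used in Fig. 3(c).
for E1, E2 in [(0.6, 1.60), (1.60, 2.75), (2.75, 6.3)]:
    print(f"W({E1:.2f}-{E2:.2f} eV) = {spectral_weight(E1, E2):.3f} (arb. units)")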
Using the optical conductivities from Fig. 3 and the charge carrier mobility, μe, of (Bi0.8Sb0.2)2Se3 in the following equation:

σ = ne e μe,     (6)

the free charge carrier density, ne, of both the bulk and surface states can be calculated. Note that the electron mobility of (Bi0.8Sb0.2)2Se3 may be different from that of pure Bi2Se3 due to the Sb doping [66]. The bulk carrier density obtained from the Hall measurements at 77 K is very close to the bulk free carrier density of 1.69×10^19 cm^-3 calculated from the ARPES measurements. This can then be compared with the electron mobility and carrier density of pure Bi2Se3 [66]. The carrier density of Bi2Se3 at 77 K is slightly lower, at 1.3×10^19 cm^-3; however, the mobility is 1380 cm²/Vs, which is very close to the (Bi0.8Sb0.2)2Se3 measurement. Using this mobility, the measured optical conductivity and equation (6), the total free charge carrier density is calculated to be 1.20×10^20 cm^-3 at 78 K. Comparing this with the conduction-band electron density calculated from the ARPES measurements, there is almost an order of magnitude difference between these estimates. This is because the electron density extracted from the optical conductivity is the total free electron density for both the bulk and the surface, whereas the electron density from the ARPES measurements is from the conduction bands in the supposedly insulating bulk of the sample. However, the bulk states can be considered a bad conductor in most cases, or more accurately a weaker conductor than the surface states, because in reality topologically insulating samples rarely achieve a truly insulating bulk due to the Mott and Ioffe-Regel criteria [56]. With these calculated electron densities, the percentage of carriers from the surface states that contribute to the overall conduction is estimated at 85.9%.

At 300 K, the electron mobility of (Bi0.8Sb0.2)2Se3 is determined to be 993 cm²/Vs from the Hall measurements, as shown in Fig. 4(b), with a bulk carrier density of 1.56×10^19 cm^-3 for our sample. The carrier density of Bi2Se3 at 300 K is again slightly lower, at 1.4×10^19 cm^-3, and the mobility is measured to be 880 cm²/Vs, which is also smaller than that of the Sb-doped Bi2Se3 [66]. By using equation (6) again with the measured mobility and conductivity, the total free charge carrier density is calculated to be 1.70×10^20 cm^-3 at 300 K. This is a significant increase from the charge carrier density at 78 K. By using the bulk carrier density from the Hall measurements, the percentage of free carriers coming from the surface states of room-temperature Sb-doped Bi2Se3 is estimated to be 90.8%, which is higher than the estimate at 78 K. This result shows that by decreasing the temperature of the TI, the overall free charge carrier density is lowered but the bulk carrier density is increased. This can only happen if free charge carriers are being scattered from the surface states to the bulk states.
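The carrier-density bookkeeping above can be cross-checked numerically; here E_b is an assumed illustrative value chosen to reproduce the quoted ARPES estimate (the actual value comes from the Fig. 1 data), and the surface-state fractions follow directly from the quoted densities:

import numpy as np

hbar, m_e, eV = 1.0546e-34, 9.109e-31, 1.602e-19   # SI units

# Parabolic-band bulk density n = (2 m* E_b)^(3/2) / (3 pi^2 hbar^3).
m_eff = 0.15 * m_e          # effective mass typical for Bi2Se3 [58]
E_b = 0.16 * eV             # assumed binding energy of the conduction-band bottom
n_bulk = (2 * m_eff * E_b) ** 1.5 / (3 * np.pi ** 2 * hbar ** 3) * 1e-6  # cm^-3
print(f"bulk density ~ {n_bulk:.3g} cm^-3")        # ~1.7e19 cm^-3

# Surface-state fraction = 1 - n_bulk / n_total, with n_total from eq. (6).
print(f"78 K:  {1 - 1.69e19 / 1.20e20:.1%}")       # -> 85.9%
print(f"300 K: {1 - 1.56e19 / 1.70e20:.1%}")       # -> 90.8%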
However, it is well known that the scattering of surface-state charge carriers in topological insulators is extremely limited. This is because back-scattering from non-magnetic impurities is prohibited by time-reversal symmetry [4]. Other methods of scattering are also negligible, as phonons are too weak for electron scattering [67], and surface Dirac plasmons have energies of the order of 10 meV (in the THz regime), which are not enough to cause significant scattering [18]. However, since the correlated plasmons seen in Fig. 2 have energies of the order of 1 eV, this is enough to induce electron scattering from the surface states to the bulk, as seen in other 2D materials such as graphene [68]. The coupling of electrons with these low-temperature correlated plasmons is the most likely reason behind the transfer of free charge carriers from the surface to the bulk conduction bands as the temperature is lowered.

Our analysis is further supported by first-principles calculations based on density functional theory. The calculated band structure of the Sb-doped Bi2Se3 surface slab is shown in Fig. 5(a), along with the projected contribution of the surface QLs (red color) and central QLs (blue color). We can see that with the Sb doping, the surface Dirac cone is shifted slightly below the Fermi level, indicating that the Bi2Se3 stays n-doped after the Sb incorporation. These excess electrons might induce stronger Coulombic interaction among the electrons, resulting in altered plasmonic properties. The relativistic spin-orbit coupling effect is self-consistently taken into account in the calculations. It is noteworthy that the spin-orbit coupling plays an important role in the formation of the electronic bands of TIs and significantly affects the optical properties, as seen in related materials [69,70]. From the band structure, the contribution from the central QLs (bulk bands) is noticeable in the valence bands near the Fermi level. It hybridizes with the surface QLs (surface bands), forming the lower Dirac cone. This implies that with Sb doping, the electronic states from the bulk region extend to the surface layers, consistent with the experimental observation. The hybridization between surface and bulk carriers is evidenced by the visualized partial charge density. As Fig. 5(b) shows, the carriers in the Dirac cone below the Fermi level are mainly from the surface QLs, but the contribution from the bulk region can be seen as well. The projected densities of states (PDOSs) further corroborate the above observations. The contribution to the DOSs near the Fermi level is found to be predominantly from the pz orbitals of Bi atoms in the surface QLs, which are hybridized with those from the bulk QLs, as shown in Fig. 5(c) and (d). This is in contrast with pristine Bi2Se3 (see Supplementary Fig. S5 [40]), where only the electrons in the surface QLs contribute to the surface states. Thus, more conducting electrons are found near the Fermi level in Bi2-xSbxSe3 than in pristine Bi2Se3. It is worth noting that it has been theoretically shown by Zhu et al. [36] that local field effects, such as those due to spin-orbit coupling, may play an important role in causing the spectral weight transfer, thus generating unconventional plasmons at low temperatures.

Conclusions:

In summary, by lowering the temperature of (Bi0.8Sb0.2)2Se3 below 250 K, we have discovered the existence of low-loss correlated plasmons at high energy due to electronic correlation. This is achieved through the simultaneous determination of the complex dielectric function, loss function, and electronic structure and dispersion of the material as a function of temperature using spectroscopic ellipsometry, ARPES, Hall measurements and first-principles calculations. We reveal a spectral weight transfer of free charge carrier density from the surface to the bulk as the temperature decreases.
This spectral weight transfer in the topological conductivity of the material is due to electrons in the surface states scattering into the bulk states via the high-energy correlated plasmons at low temperatures. By controlling the correlated plasmonic behavior in Sb-doped Bi2Se3 through temperature changes, the topological conductivity of TIs can be manipulated, which may lead to new, advanced control over future plasmonic devices.

Note on the effect of moisture and other environmental contamination:

In order to better understand the effect of temperature and moisture on the sample, we have recorded the ARPES spectra at both room temperature and liquid nitrogen temperature. We observed a shift of 65 meV in the Dirac point towards higher binding energy at low temperature (see Supplementary Fig. S4 a-c). The shift in the valence band could be due to a combination of temperature, moisture [1] and other environmental effects, which introduce more n-type doping to the system. In order to have a better idea of the depth dependence, we have also performed XPS measurements at both high and low temperature using a laboratory X-ray source (hν = 1486.6 eV), which is relatively more bulk sensitive than the UV light (hν = 21.21 eV) used for the ARPES experiment. We have measured the XPS spectra for samples cleaved both in situ and in outside air. We do not see any significant changes in the peak height ratios of bismuth, antimony and selenium. The results are presented in Supplementary Fig. S4 d. We only observed the characteristic Bi 5d5/2 and Bi 5d3/2 doublet peaks corresponding to Bi-Se bonding. We do not see any signature of a Bi bilayer peak appearing in our samples, as observed by M. T. Edmonds et al. [2] in the pure Bi2Se3 system after atmospheric exposure. Those authors also reported a large reduction in the Se 3d core level peak due to bismuth bilayer formation, which we did not observe in the case of our Sb-doped Bi2Se3 samples under our experimental conditions. Furthermore, the authors reported a shift of several hundred meV in the peak position, which was not seen in our case. We have not observed any signature of the sample surface getting oxidized under our experimental conditions [3]. We believe that the changes in the valence band (ARPES) spectra occur due to changes in the immediate surface vicinity, while the bulk of the sample remains unaffected by moisture or other environmental contamination. As UV ARPES is extremely surface sensitive, the surface contamination effects are expected to show up only in the ARPES data. On the other hand, the spectroscopic ellipsometry technique is not as surface sensitive as photoelectron spectroscopy techniques, as the light penetration depth is tens or even hundreds of nm (cf. Supplementary Fig. S2). It is noted that the effect of moisture in shifting the ARPES spectra is a function of time: a longer time on the cold manipulator introduces a gradual shift in the spectra, which eventually stabilizes. The ellipsometry spectra are not a function of time and are reproducible over different heating/cooling cycles. We believe that the changes in the optical spectra are solely due to the temperature, and not due to the minor changes in the immediate vicinity of the surface caused by the moisture.
Supplementary Fig. S5(b) shows the atomistic structure of the 6 QL Bi2Se3 surface slab, with the green areas highlighting the origin of the partial charge density that contributes to the states closest to the Fermi level [shown as the green shadow in Supplementary Fig. S5(a)]. All of the contributions to the surface states come from the surface layers, whereas in Bi2-xSbxSe3 there are more contributions from the bulk QLs, as seen in Fig. 5(b). The corresponding projected densities of states are shown in Supplementary Fig. S5(c) and S5(d), respectively. The DOSs closest to the Fermi level are dominated by the pz orbitals of the Bi atoms in the surface QLs.
Detection of SARS-CoV Antigen via SPR Analytical Systems with Reference

Introduction

Surface plasmon resonance (SPR) is an extremely sensitive optical technique to detect the changes in refractive index occurring at a metal interface, which can monitor processes in real time without labelling requirements (Lee et al., 2006; Kanda et al., 2004; Homola, 2003). Since 1989, our group has been focusing on SPR technology development. Several generations of SPR analytical platforms, including a Reference SPR Analysis System, an Electrochemical-SPR Analysis System, a Portable SPR Analysis System, and a High-Throughput, Multi-analyte Imaging SPR Analysis System, have been designed, fabricated and tested for various applications.

SARS (Severe Acute Respiratory Syndrome) is a serious infection, which caused tremendous loss in China in 2003. It is caused by infection with the SARS-associated coronavirus (SARS-CoV) (Rota et al., 2003), and early diagnosis of SARS is very important for better control of future SARS epidemics (Drosten et al., 2003). Several diagnostic methods are currently used in hospitals, but none of them can be used for early detection at low cost. For example, ELISA cannot detect the antibodies produced by infected patients within the first two weeks of infection.

In this chapter, a novel detection method for the SARS-CoV antigens is reported, using a home-developed Reference SPR Instrument, which can monitor reaction spots together with a control in a single flow cell. Monoclonal antibodies to SARS-CoV were immobilized only on the reaction spots via selective chemical modification, with no immobilization on the reference spots.
Subsequently, a higher version of the reference SPR analytical system, the high-throughput, multi-analyte imaging SPR (HMI-SPR) analytical system, was developed and tested successfully.

Theory

Surface plasmon resonance (SPR) is a physical phenomenon which occurs when a polarized light beam is projected through a prism onto a thin metal film (gold or silver). At a specific angle of the projected light, resonance coupling between light photons and surface plasmons of the gold can occur when their frequencies match. Because the resonance leads to an energy transfer, the reflected light shows a sharp intensity drop at the angle where SPR takes place. Resonance coupling of the plasmons generates an evanescent wave that extends about 100 nm above and below the metal surface. As an analytical tool, it is important that a change in the refractive index within the reach of the evanescent wave causes a change of the resonance angle at which the sharp intensity drop is observed. Binding of target biomolecules to molecules immobilized on top of the metal (gold) surface of the sensor leads to a change of refractive index and can be recorded by a detector as a change of intensity in the reflected light. This setup enables real-time measurement of biomolecular interactions, with refractive index changes proportional to mass changes.

Generally, the Kretschmann configuration and resonance angle modulation are used in SPR. The Kretschmann configuration (Kretschmann & Raether, 1968) is the most common geometry of SPR platforms, in which the incident light comes from the high-refractive-index medium (prism) and reflects at the gold surface without travelling through the liquid. The resonance angle is defined according to the resonance peak of SPR, which is in turn defined from the SPR curve, obtained by recording the change of intensity of the reflected light as the angle of the incident light is varied. The resonance peak of SPR is at the lowest point of the SPR curve, where light absorption is maximal and the intensity of the reflected light is minimal at a specific angle.

SPR is a well-established technique for observing biomolecular binding reactions. In a usual setup, light traveling through a high-refractive-index (RI) substrate reflects from the substrate surface, which is coated with a thin layer of gold (50 nm). Certain biomolecules are immobilized on the gold surface, which bind the targeted biomolecules in aqueous samples. As targeted biomolecules bind to the gold surface, the surface RI increases roughly in proportion to the quantity of the molecules immobilized. The change of the surface RI results in a shift of the resonance peak of SPR, and detection of this shift monitors the reaction between biomolecules quantitatively, for example the concentration of targeted molecules. Since the shift of the resonance peak of SPR could also be brought about by other disturbances (e.g. temperature changes), it is necessary to use control signals to distinguish the shifts caused by the reactions from the shifts caused by other interfering factors.
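As a rough illustration of the resonance condition in the Kretschmann configuration, the following sketch estimates the resonance angle from the surface-plasmon dispersion relation; the prism index and the gold permittivity here are assumed, approximate literature values, not parameters of the instrument described below:

import numpy as np

# Resonance when the in-plane photon momentum in the prism matches the
# surface-plasmon wavevector: n_p * sin(theta) = Re[sqrt(em*ed/(em+ed))].
n_p = 1.515              # assumed BK7 prism index
eps_m = -12.9 + 1.2j     # approximate gold permittivity near 650 nm
eps_d = 1.33 ** 2        # aqueous sample medium

k_ratio = np.sqrt(eps_m * eps_d / (eps_m + eps_d)).real
theta_res = np.degrees(np.arcsin(k_ratio / n_p))
print(f"resonance angle ~ {theta_res:.1f} deg")  # ~71 deg; shifts as binding raises eps_d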
Reagents and materials

Protein A (from Staphylococcus aureus), EDC, NHS, HEPES and ethanolamine were purchased from Sigma. Human immunoglobulin G (IgG), rabbit IgG, goat anti-human IgG, and goat anti-rabbit IgG were purchased from Beijing Xinjingke Biotechnology Company. Sterilized SARS-CoV (PUMC-01 strain, TCID50 = 10^6 pfu/ml) and monoclonal antibodies (11G9, 2F1, 13D10, 11H12) to this virus were supplied by the Institute of Laboratory Animal Science, CAMS & PUMC. All other reagents were purchased from Beijing Chemical Reagents Company.

Bare gold slide patterning

The patterns of the bare gold slides, with reaction areas and reference areas, were designed in L-Edit and transferred to a chromium photolithography mask. The fabrication was as follows. The glass slides were spin-coated with positive photoresist (AZ 1500) and patterned by an MA4 optical stepper (Karl Suss, Germany). The exposed photoresist was developed and the glass slides were coated with 2 nm of chromium plus 50 nm of gold in a JGP560B3-type magnetron sputtering instrument (SDY Technology Development Co., Ltd., Chinese Academy of Sciences, China). After removing the photoresist in acetone, bare gold slides with patterned areas were finally obtained (Fig. 1). Prior to each experiment, the bare gold slides were washed in ethanol to remove fingerprints, oily residues and dust particles.

SPR chip preparation

The sensor chips were prepared by the following procedure. First, the bare gold slides were treated in piranha solution (35% H2O2 : 96% H2SO4 = 3:7) at 90°C for an hour. After the slides were washed with deionized water and ethanol, they were soaked in a 16-mercaptohexadecan-1-ol solution (5×10^-3 mol/L in 80% ethanol and 20% water) to obtain a hydrophilic surface. Then the slides were soaked in a solution containing 0.4 mol/L sodium hydroxide, diglyme and 0.6 mol/L epichlorohydrin at 25°C for 4 h. After the slides were washed thoroughly with deionized water, ethanol and deionized water sequentially, they were soaked in a dextran solution (3.0 g dextran T500 in 10 ml of 0.1 mol/L NaOH) at 25°C for 20 h. Finally, a bromoacetic acid solution was dropped only onto the detection gold areas of the slides at 25°C for 16 h. A thin layer of carboxymethyl dextran was thus formed only on the detection gold areas of the slides, while nothing was immobilized on the reference gold areas.

Protein immobilization online

After an SPR chip was mounted in the home-developed Reference SPR Analytical System, angle scanning was conducted to choose a proper position for fixed-angle detection. HBS buffer (HEPES, 0.01 mol/L; NaCl, 0.15 mol/L; Tween 20, 0.05%; pH 7.4) was first introduced to wash the chip. Then a mixed solution of EDC and NHS (final concentrations 0.2 mol/L and 0.05 mol/L, respectively) was used to activate the SPR chip. After activation, the chip was washed with HBS buffer. Then a protein solution (with the pH of the solution below the pI of the protein) was introduced into the chip to react with the activated area for protein immobilization. In order to eliminate the activated sites without immobilized proteins, the chip was deactivated by flushing with ethanolamine (1 M) to remove molecules adsorbed loosely on the chip (Johnsson et al., 1991).

SPR instrument setup

As shown in Fig. 2, a Reference SPR Instrument System, capable of qualitative and quantitative measurements, was manufactured by our group. The reaction area and the control area of the system can be detected simultaneously in one flow channel by using this new Reference SPR Instrument System, resulting in two signals recorded in one run, in which one signal shows the curve of the reaction and the other simultaneously shows the curve of the reference.
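The value of the second, reference channel is that common-mode disturbances cancel in the difference of the two signals. A minimal sketch with hypothetical sensorgrams (all numbers are illustrative, not measured data):

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 2000.0)                      # time, s

# Shared instrumental drift (e.g. temperature) appears in both channels.
drift = 2e-3 * t
reference = drift + rng.normal(0, 0.05, t.size)
binding = 60.0 / (1.0 + np.exp(-(t - 800.0) / 120.0))   # binding response
reaction = drift + binding + rng.normal(0, 0.05, t.size)

corrected = reaction - reference                # reference-subtracted signal
print(f"net binding signal ~ {corrected[-200:].mean():.1f} units")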
The range of angle adjustment is 40°-70°, which makes both gas-sample and liquid-sample measurements feasible. The detection resolution of the resonance angle is 0.001°, and the detectable range of refractive index is from 1.04 to 1.43 with a sensitivity better than 1×10^-5 RIU. Design of the home-developed reference SPR analytical system As shown in Fig. 3, the reference SPR analytical system consists of a semiconductor laser (1), a polarizing filter and lens system (2), a prism (3), a high-resolution detector (4), a computer (5), a sampling system (6), an SPR chip with reference (7) and a microfluidic device with a microchannel (8). The laser and the optical system were installed on one rotating arm, while the high-resolution detector was installed on another. The prism, the SPR chip and the microfluidic device were installed on an anti-floating surface between the two rotating arms. Both rotating arms were controlled by stepper motors with a coding system. During experiments, a 650 nm beam from the semiconductor laser passed through the optical system into the prism and reached the SPR chip. The beam then reflected into the detector, and the intensity of the reflected light was recorded and exported to the computer. The sampling system controlled two peristaltic pumps, and different solutions were introduced to the SPR chip through the microfluidic device. Detection of SARS-CoV by immobilizing antibodies directly In the type I design, the antibodies to SARS-CoV were immobilized directly on the experiment area after the chip was activated. The solution of monoclonal antibodies to SARS-CoV (pH 4.6) was pumped into the microfluidic device for 2000 s. Then ethanolamine (1 M, pH 8.5) was pumped in to deactivate the chip for 10 min. Finally, HBS buffer was pumped in to wash the SPR chip. The signal recording of the immobilization step is shown in Fig. 4. When the monoclonal antibodies to SARS-CoV were introduced, the reaction line increased significantly, and the light intensity of the reaction line after deactivation was still higher than the intensity before antibody immobilization. This intensity increase of the reaction line, with the reference line unchanged, demonstrated that the monoclonal antibodies to SARS-CoV were successfully immobilized on the reactive area. After the antibodies were fixed on the SPR chip, sterilized SARS-CoV (diluted 1:5 with acetic acid buffer, pH 4.4) was pumped into the microfluidic device and the reaction between the antibodies and SARS-CoV was recorded (Fig. 5). However, Fig. 5 shows no obvious intensity increase of the reaction line, which may be caused by steric hindrance preventing the antibodies from contacting SARS-CoV. Fig. 5. Detection of sterilized SARS-CoV by immobilizing antibodies directly. The green line is the reference curve; the red line is the reaction curve. Detection of SARS-CoV by using Protein A In the type II design, 0.5 mg/mL Protein A solution was pumped in after the activation step. The solution of monoclonal antibodies to SARS-CoV (pH 4.6) was flushed in after the deactivation step, to bind with the immobilized Protein A. The chip was then washed with PBS buffer until the baseline signal was stable. After that, the same sterilized SARS-CoV solution (diluted 1:5 with acetic acid buffer, pH 4.4) was pumped into the microfluidic device to react with the immobilized antibody.
The entire procedure was recorded, as shown in Fig. 6, which demonstrates that the antibody to SARS-CoV was immobilized efficiently onto the reaction area of the SPR chip, with the detection signal of SARS-CoV increasing to 60 units. This is 55 units higher than the reference signal, with a binding rate of about 1.4 units/min. Protein A interacts pseudo-immunologically with the Fc fragments of immunoglobulins (Cedergren et al., 1993), which extends the distance between the antibodies and the surface and so increases the antibodies' degrees of freedom. At the same time, the Fab fragments which react with antigens can face the solution side because of the Protein A orientation. Protein A therefore appears to be a useful agent for antibody immobilization, increasing the binding efficiency between antigens and antibodies. The reference area was used to detect adsorption of impurities in the analyte, in order to eliminate the disturbance of non-specific adsorption. The sensitivity of SARS-CoV detection reached 1.66×10^4 PFU/mL. Detection of IgG by using an improved reference SPR system The Reference SPR Analytical System used to detect SARS-CoV is a one-channel, two-parameter instrument: only two analytes in one sample can be detected. The next generation of the reference SPR system, the high-throughput, multi-analyte imaging SPR (HMI-SPR) analytical system, has been designed, fabricated and tested, as shown in Fig. 7. Fig. 7. Photograph of the high-throughput, multi-analyte imaging SPR analytical system. A 5-spot sensor chip was used as an example to briefly demonstrate the functions of the HMI-SPR instrument. First, the 5-spot bare gold slides were modified by chemical methods. A thin layer of carboxymethyl dextran was immobilized on the gold spots of the slides to form a carboxyl-terminated surface. Then a mixed solution of EDC and NHS (final concentrations 0.2 mol/L and 0.05 mol/L) was used to activate the surface. 0.5 μL of solution containing 1 mg/mL rabbit IgG was dotted on each of the left two gold spots, while 0.5 μL of solution containing 1 mg/mL human IgG was dotted on each of the right two gold spots, with the middle gold spot left unmodified. After the solutions were dotted on the gold areas and allowed to react with the dextran on the bare gold areas for 10 min, rabbit IgG and human IgG were fully immobilized on the chip. The sensor chip was then deactivated by dipping in 1 mol/L ethanolamine solution; during this step, proteins that did not bind firmly to the sensor chip were removed. After that, the sensor chip was rinsed in deionized water to wash away residual salt. Finally, the modified sensor chip was mounted in the HMI-SPR system for the immunoassay tests. The detection process yielded five real-time curves for the 5-spot chip, shown in Fig. 8a. First, HBS buffer was introduced for 400 s to establish the baselines, and then a solution containing 1.2 mg/mL goat anti-rabbit IgG was injected and left to react for 300 s. Second, HBS buffer was introduced again for 400 s to recover the baselines, and then a solution containing 0.5 mg/mL goat anti-human IgG was injected and left to react for 300 s. Finally, HBS buffer, HCl buffer and HBS buffer were introduced sequentially, the HCl buffer serving to bring the sensor chip back to its original setup. As shown in Fig. 8a, when goat anti-rabbit IgG was injected, the immunoreaction occurred at the rabbit IgG binding areas (curves 1 and 2).
When goat anti-human IgG was injected, the immunoreaction occurred at the human IgG binding spots (curves 4 and 5). After HCl buffer was injected, all the gold areas of the sensor chip returned to their baselines. A linear calibration of the signal curves was performed to shift the baselines to zero and subtract the reference curve, as shown in Fig. 8b (one possible implementation is sketched below). To date, we have designed, fabricated and tested 15-spot, 27-spot, 45-spot, 96-spot and 144-spot sensor chips. Conclusion In this chapter, a home-developed Reference SPR Analytical System has been demonstrated, with which SARS-CoV antigens were successfully detected in fixed-angle measurement mode, using Protein A to obtain higher sensitivity. An improved reference SPR system, the high-throughput, multi-analyte imaging SPR (HMI-SPR) analytical system, was then briefly illustrated using a 5-spot sensor chip to detect two kinds of immunoglobulin G. By using a reference area, the performance of the SPR system is improved: refractive-index interference from different solutions and different temperatures is avoided, and disturbances due to non-specific adsorption are eliminated. The home-developed Reference SPR Analytical System is therefore promising for clinical diagnosis, directly measuring crude patient samples without pretreatment procedures. (a) (b) Fig. 8. SPR response curves for qualitative detection (a) before and (b) after data processing. Curves 1 and 2: spots with immobilized rabbit IgG; curves 4 and 5: spots with immobilized human IgG; curve 3: reference.
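The chapter does not spell out the linear calibration, so the following is one plausible implementation of the two steps described above (baseline zeroing, then reference subtraction) on a synthetic sensorgram. The variable names and the simulated drift are our own assumptions.

```python
import numpy as np

def calibrate(curves, reference, n_base):
    """Shift each curve's baseline to zero using its first n_base samples,
    then subtract the (equally re-zeroed) reference curve to remove the
    common-mode drift from bulk refractive index and temperature."""
    ref = reference - reference[:n_base].mean()
    return np.array([c - c[:n_base].mean() - ref for c in curves])

# Hypothetical sensorgrams sampled at 1 Hz: 400 s buffer baseline, then binding.
rng = np.random.default_rng(0)
t = np.arange(1200.0)
drift = 0.002 * t                                        # shared drift
binding = np.where(t > 400, 5.0 * (1 - np.exp(-(t - 400) / 120.0)), 0.0)
reaction = 10.0 + binding + drift + rng.normal(0, 0.05, t.size)
reference = 12.0 + drift + rng.normal(0, 0.05, t.size)

corrected = calibrate([reaction], reference, n_base=400)
print(round(corrected[0, -1], 2))   # ~5 response units; the drift is removed
```

Subtracting the re-zeroed reference removes any disturbance common to the reaction and reference spots, which is exactly the role the reference area plays in this system.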
Diabetes and correlates of cardiovascular diseases among women aged 15-49 years in Benin: evidence from a Demographic and Health Survey Background: Sub-Saharan Africa (SSA) countries are facing an epidemiological shift from infectious diseases to chronic diseases, such as cardiovascular diseases (CVDs). The burden of CVDs in a population results from the prevalence of several factors. This study aimed to determine the association of diabetes and correlates with heart and lung diseases. Methods: We used Benin Demographic and Health Survey (BDHS) population-based cross-sectional data. BDHS 2017-18 is the fifth of its kind. A total of 7712 women of reproductive age were included in this study. Heart and lung diseases were the outcome variables. Percentages and logistic regression models were used to analyze the data. The level of statistical significance was set at 5%. Results: The prevalence of heart disease was 1.3% (95%CI: 1.0%-1.7%) and that of lung disease was approximately 1.5% (95%CI: 1.2%-1.9%). Women who had diabetes were found to be 3.57 times significantly more likely to have heart disease when compared with those without diabetes (AOR = 3.57; 95%CI: 1.51-8.45). Furthermore, women with diabetes were 4.55 times significantly more likely to also have lung disease when compared with those without diabetes (AOR = 4.55; 95%CI: 2.06-10.06). Women who had hypertension were found to be 3.18 times significantly more likely to have heart disease when compared with those without hypertension (adjusted odds ratio (AOR) = 3.18; 95%CI: 2.02-4.98). Conclusion: Diabetes and hypertension were significantly associated with heart and lung diseases among women of reproductive age in Benin. Regardless of this burden, the sub-region is now witnessing a rapid epidemiological evolution characterized by a move from the dominance of communicable diseases to non-communicable ones, as is being observed in many other low- and middle-income economies around the globe (13). Nevertheless, there is growing research indicating that disease conditions such as diabetes (14,15), lung-related and respiratory diseases (16), and other CVDs (17,18) have now become a serious disease burden in many SSA countries. Surveillance of risk factors for lung and heart diseases in SSA over the past three to four decades shows that the majority of young adults are exposed to one or more NCD risk factors. These risk factors include tobacco and excessive alcohol use, poor diet, sedentary lifestyle, hypertension, and overweight or obesity (19). The SDGs take the global challenge of NCDs into consideration. They target a 30% reduction by the year 2030 in premature deaths caused by the foremost NCDs, such as lung and heart-related diseases, and promote general wellbeing and mental health, as enshrined in SDG goal 3 (11,20). This is also one of the targets of the Global NCDs Action Plan 2013-2020 (21). The achievement of these targets by any country in SSA, such as the Benin Republic, will depend on a total overhaul of the health system, making sure that bottlenecks such as underfunding, poor and inadequate training, and poor equipment of the health sector are addressed. The majority of health systems in SSA are fragmented, under-funded, fragile and infrastructurally limited in their capacity to handle this ever-increasing communicable and non-communicable disease burden (22,23). In light of the above, we undertook this study to examine the association between diabetes and correlates of cardiovascular diseases among women aged 15-49 years in Benin.
Data source We used BDHS population-based cross-sectional data. BDHS 2017-18 is the fifth of its kind. A total of 7712 women of reproductive age were included in this study. BDHS used a stratified multi-stage cluster random sampling technique for data collection. Data were collected on vital reproductive health issues via structured interviewer-administered questionnaires. The DHS program was established by the United States Agency for International Development (USAID) in 1984. It was designed as a follow-up to the World Fertility Survey and the Contraceptive Prevalence Survey projects. It was first awarded in 1984 to Westinghouse Health Systems (which subsequently evolved into part of ORC Macro). The project has been implemented in overlapping five-year phases: DHS-I ran from 1984 to 1990, DHS-II from 1988 to 1993, and DHS-III from 1992 to 1998. In 1997, DHS was folded into the new multi-project MEASURE program as MEASURE DHS+. Since 1984, more than 130 nationally representative household-based surveys have been completed under the DHS project in about 70 countries. Many of these countries have conducted multiple DHS to establish trend data that enable them to gauge progress in their programs. Countries that participate in the DHS program are primarily countries that receive USAID assistance; however, several non-USAID-supported countries have participated with funding from other donors such as UNICEF, UNFPA or the World Bank. DHS are designed to collect data on fertility and reproductive health, child health, family planning and HIV/AIDS. Due to the subject matter of the survey, women of reproductive age are its main focus. Women eligible for an individual interview are identified through the households selected in the sample. Therefore, all DHS surveys utilize a minimum of two questionnaires: a Household Questionnaire and a Women's Questionnaire. DHS data are publicly available and can be accessed from the MEASURE DHS database at http://dhsprogram.com/data/available-datasets.cfm. DHS are usually implemented by the National Population Commission (NPC) with financial and technical assistance by ICF International provisioned through the USAID-funded MEASURE DHS program. DHS involve a multi-stage stratified cluster design based on a list of enumeration areas (EAs), which are systematically selected units from localities constituting the Local Government Areas (LGAs). Details of the sampling procedure have been reported previously (24). Geography of Benin The country stretches from north to south and is a long, narrow country in West Africa. Outcome variables The chronic diseases were measured dichotomously (yes vs. no) as reported by the women: "In the past 12 months, ever told has heart disease" and "ever diagnosed with lung disease by doctor or nurse". Explanatory variables Hypertension and diabetes were measured dichotomously (yes vs. no); usage of tobacco products/cigarettes (use vs. not use); ever used anything to delay getting pregnant (yes vs. no); parity: 1-2/3-4/>4/no birth; total lifetime number of sex partners; department: Alibori/Atacora/Atlantique/Borgou/Collines/Couffo/Donga/Littoral/Mono/Ouémé/Plateau/Zou; place of residence: urban/rural; marital status: not married/currently married or living with a partner/formerly married; maternal education: no formal education/primary/secondary+; participation in the labour force: working vs. not working; covered by health insurance: covered vs. not covered.
Household wealth quintile: principal components analysis (PCA) was used to assign the wealth indicator weights. This procedure assigned scores to, and standardized, wealth indicator variables such as bicycle, motorcycle/scooter, car/truck, main floor material, main wall material, main roof material, sanitation facilities, water source, radio, television, electricity, refrigerator, cooking fuel, furniture, and number of persons per room. The factor coefficient scores (factor loadings) and z-scores were calculated. For each household, the indicator values were multiplied by the loadings and summed to produce the household's wealth index value. The standardized z-score was used to disentangle the overall assigned scores into poor/middle/rich categories (27,28) (see the sketch below). Ethical consideration We utilized population-based secondary datasets available in the public domain/online with all identifying information removed. The authors were granted access to use the data by MEASURE DHS/ICF International. The DHS Program is consistent with the standards for ensuring the protection of respondents' privacy. ICF International ensures that the survey complies with the U.S. Department of Health and Human Services regulations for the protection of human subjects. No further approval was required for this study. More details about data and ethical standards are available at http://goo.gl/ny8T6X. Analytical approach We used the built-in survey command of Stata for all analyses to account for the sampling strata, primary sampling units, and sampling weights provided in the dataset. Prevalences of heart and lung diseases were reported as percentages. A correlation matrix was used to conduct multicollinearity diagnostics, examining the interdependence between explanatory variables with a cut-off of 0.8, above which multicollinearity is known to cause concern (29). All significant variables in the unadjusted logistic regression model were retained in the model due to lack of collinearity. Multivariable binary logistic regression was used to analyze the data. The level of statistical significance was set at 5%. All data analyses were conducted using Stata 14.0 (StataCorp, College Station, Texas, United States of America). Results The prevalence of heart disease was 1.3% (95%CI: 1.0%-1.7%). Results in Table 2 showed that 4.7% and 8.4% of women who had hypertension and diabetes, respectively, also had heart disease, in contrast to 1.0% and 1.2% of those without hypertension and diabetes, respectively, who had heart disease. Women who had hypertension were found to be 3.18 times more likely to have heart disease when compared with those who had no hypertension (AOR = 3.18; 95%CI: 2.02-4.98). Those who had diabetes were also found to be 3.57 times more likely to have heart disease when compared with those without diabetes (AOR = 3.57; 95%CI: 1.51-8.45). Furthermore, women aged 30-34 years were 3.49 times as likely to have heart disease when compared with women aged 15-19 years (AOR = 3.49; 95%CI: 1.18-10.31). Geographical region was also significantly associated with heart disease (see Table 1 for details). Women with diabetes were likewise significantly more likely to have lung disease (AOR = 4.55; 95%CI: 2.06-10.06; see Table 2 for details). Discussion This study is the first of its kind in providing a nationwide report of the prevalence of heart and lung diseases among women of reproductive age in the Republic of Benin, West Africa. Some studies from other SSA countries have reported findings similar to the ones reported here.
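Returning to the wealth-index construction described above, the following is a minimal sketch of the PCA procedure on a hypothetical asset matrix. It is illustrative only; the actual DHS indicator coding and weighting are more detailed than this, and the data here are simulated.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Hypothetical household asset matrix: rows = households, columns = 0/1
# indicators (radio, television, refrigerator, bicycle, electricity, ...).
assets = rng.integers(0, 2, size=(500, 12)).astype(float)

# Standardize each indicator, then take the first principal component score
# (indicator value times factor loading, summed) as the wealth index.
z = (assets - assets.mean(axis=0)) / assets.std(axis=0)
index = PCA(n_components=1).fit_transform(z).ravel()
# Note: the sign of a principal component is arbitrary; in practice the
# index is oriented so that higher scores correspond to richer households.

# Split the standardized index into poor / middle / rich categories.
wealth = np.digitize(index, np.quantile(index, [1/3, 2/3]))
print(np.bincount(wealth))   # roughly equal thirds: 0=poor, 1=middle, 2=rich
```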
However, no report has been made on the status of heart and lung diseases and their relationship with hypertension and diabetes among reproductive-age women in the Benin Republic. The prevalences of heart and lung diseases observed in this study were approximately 1.3% and 1.5%, respectively. This is similar to the 2.4% reported in Yaounde, Cameroon (30), but lower than the 20.2% reported in urban and rural Uganda (31) and the 17.8% reported in the Abeshge District of Ethiopia (32). Reports by several researchers show that the prevalence of chronic obstructive pulmonary disease, the third leading cause of mortality from non-communicable diseases globally (10), ranges from 4.1% to 22.2% in SSA (33). The lower prevalence observed in this study may be due to the fact that it uses national survey data in which only known cases of lung and heart disease are captured: the interviewers did not screen the participants to determine who had lung or heart disease, but relied on the medical history reported by the participants. There may therefore be participants who have lung or heart disease but, never having been diagnosed, are unaware of it. This study shows that 4.7% and 8.4% of women who had hypertension and diabetes, respectively, also had known heart disease, compared with 1.0% and 1.2% of those who had heart disease but no hypertension or diabetes, respectively. Our study observed an association between hypertension and diabetes on the one hand and heart disease on the other: hypertension and diabetes were risk factors for heart disease. This association is very much in agreement with findings from several studies across SSA countries, which have also reported that hypertension and/or diabetes are risk factors for CVDs, such as in Nigeria (34), Cameroon (35,36) and Benin (37), and in the 2010 Global Burden of Disease report on CVDs in SSA (18). Mandi et al. also reported that hypertension was the most prevalent risk factor for CVD in rural Burkina Faso (38). Moreover, a four-country SSA cross-sectional study reported that 50.0% of the participants with hypertension were unaware of being hypertensive (39). This implies that the menace of hypertension may be much greater than is currently known among the SSA population, as some studies have also suggested (40)(41)(42). It therefore suggests that a large proportion of the region's hypertension cases remain undiagnosed, untreated, or inadequately treated, and hence hypertension may be the largest contributor to morbidity and mortality caused by complications of CVD (42)(43)(44). Over the past decades in SSA, the attention of governments and funding agencies has been directed to the fight against communicable diseases such as HIV/AIDS, tuberculosis and malaria, with NCDs neglected or relegated to the background (3,4). The burden of hypertension is such a public health concern that it is reported to be the single leading cause of death and hospitalization globally (41,(45)(46)(47). Current trends in globalization may have contributed to the rise in non-communicable diseases in many low- and middle-income countries, including SSA countries, as changes in lifestyle and dietary habits have occurred among their populations (19). Our study also found an association between diabetes and lung diseases among the studied participants.
Approximately 9.6% of those with known lung disease also had diabetes, compared with 1.6% who had lung disease without elevated blood glucose. Several other reports have shown that diabetes is a comorbidity of lung disease (48)(49)(50)(51)(52)(53)(54). The relationship between lung diseases and diabetes has been extensively studied, though the mechanism underlying this association is not well understood. Possible mechanisms have been offered: hyperglycemic effects on the physiologic status of the lungs, inflammatory responses, or the lungs' susceptibility to infection may be significant contributors to this association (55,56). Another possible mechanism is offered by Zheng et al., who attributed the association to sustained diabetes resulting in oxidative stress (OS) and thereby tissue damage (57). Aside from these possibilities, lifestyle factors such as tobacco use, sedentary living, physical inactivity and air pollution, as well as age, have been implicated as possible risk factors for both heart and lung diseases (37,51,(58)(59)(60). This study observed that geographical department and the age of the participants are determining factors for heart and lung diseases. The Benin Republic has twelve departments, three of which are essentially urban (Ouémé, Littoral and Atlantique), three essentially rural (Atacora, Borgou and Zou), while the rest are essentially semi-urban (61,62). We observed that participants from the rural departments were less likely to have heart and lung diseases than participants from the essentially urban departments. This finding corroborates a report from South Africa, in which the most developed areas in the study recorded higher rates of heart and lung disease (63). Another study reported that the burden of heart-related diseases lay in the urban areas and the densely populated parts of the city (60). That study also revealed that the majority of CVDs were in the elite and middle-class neighborhoods: CVD was predominantly high in rich environments, while generally low in the middle-income and more rural/urban-sprawl neighborhoods (60). Two South African studies also agree that heart diseases are more common in urban areas than in rural areas (64,65). The higher prevalence observed among participants from urban departments compared with those from essentially rural departments can be explained in view of industrialization and urbanization: urban cities come with the daily hustle of keeping up with the day's activities, stress from traffic, and vehicular as well as industrial pollution. People in urban cities are also prone to poor dietary lifestyles, as many often depend on junk food, and to sedentary lifestyles or physical inactivity due to office work, as well as tobacco use, compared with rural dwellers. Our study accordingly revealed that women from the mostly urban departments were more likely to have heart and lung diseases than those in the rural departments. Strengths and limitations The major strength of this study is the use of nationally representative data, so the findings are generalizable to women of reproductive age in Benin, West Africa. However, only association, not causation, can be inferred due to the cross-sectional nature of the data. Also, we were unable to explore other contributory risk factors of heart and lung diseases such as overweight/obesity, salt intake, psychosocial stress, and other endogenous factors.
Conclusion We found that hypertension and diabetes were associated with heart disease, and diabetes with lung disease, among women aged 15-49 years in Benin. Health policymakers and government need to focus on widespread prevention and control interventions for heart and lung diseases through improved screening for risk factors and early detection.
Valid belief updates for prequentially additive loss functions arising in Semi-Modular Inference Model-based Bayesian evidence combination leads to models with multiple parametric modules. In this setting the effects of model misspecification in one of the modules may in some cases be ameliorated by cutting the flow of information from the misspecified module. Semi-Modular Inference (SMI) is a framework allowing partial cuts which modulate but do not completely cut the flow of information between modules. We show that SMI is part of a family of inference procedures which implement partial cuts. It has been shown that additive losses determine an optimal, valid and order-coherent belief update. The losses which arise in Cut models and SMI are not additive. However, like the prequential score function, they have a kind of prequential additivity which we define. We show that prequential additivity is sufficient to determine the optimal valid and order-coherent belief update and that this belief update coincides with the belief update in each of our SMI schemes. Introduction Bayesian analysis integrates different sources of information or "modules" into a single analysis through Bayes theorem and quantifies uncertainties in parameters. The information in each module, which may be prior belief or observations or both, is encoded as a parametric model. Evidence synthesis can give better predictability, more precise estimation, and access to shared parameter estimation (Ades and Sutton, 2006; Sweeting et al., 2009; Harris et al., 2012; Fithian et al., 2015; Pacifici et al., 2017; Nicholson et al., 2021). As modules are added to an overall model, there is an increasing hazard of misspecification. Methods that help us carry out Bayesian analysis on misspecified models have been in development for some time without explicit consideration of modularisation. We divide these into three classes: methods which temper the likelihood, leading to power posteriors (Walker and Hjort, 2001; Grünwald, 2012; Miller and Dunson, 2018); methods which use bootstrapping in a Bayesian setting, including the Weighted Likelihood Bootstrap (Newton, 1991; Newton and Raftery, 1994; Lyddon et al., 2019), the Posterior Bootstrap (Pompe and Jacob, 2021) and BayesBag (Bühlmann, 2014; Huggins and Miller, 2021); and methods which replace the likelihood with some more general loss function mediating data and parameter, including PAC-Bayes (Germain et al., 2016; Zhang, 2006; McAllester, 1998; Shawe-Taylor and Williamson, 1997), Gibbs posteriors (Zhang, 2006; Jiang and Tanner, 2008) and Generalized Bayes (Bissiri et al., 2016; Grünwald and van Ommen, 2017). All are relevant in the multi-modular setting. Multi-modular Bayesian inference with misspecified models has some features which distinguish it from misspecification in single-module settings. Liu et al. (2009) gave an early "modularization" analysis. Markov melding (Goudie et al., 2019) and Bayesian melding (Poole and Raftery, 2000) can be characterised as dealing with priors which conflict across modules. In our own work we assume that modules have been identified as either misspecified or well-specified. This may be the conclusion of a first-stage Bayesian analysis of the overall multi-modular model. A typical and well-founded objective is to estimate the parameters of a well-specified module making careful use of information from misspecified modules. Cut-model inference (Plummer, 2015) has proven very effective in this setting. We discuss this in detail below.
It can be thought of as a kind of sequential imputation procedure, in which the distribution of a shared parameter is imputed from the information in one module and then passed on as a kind of prior for the shared parameter in a second module. This is not Bayesian inference, as information from the second module does not inform the shared parameter. An early form of Cut-model inference has been available in WinBUGS (Spiegelhalter et al., 2014) for some time. Cut models have found many applications: air pollution (Blangiardo et al., 2011), epidemiological models (Maucort-Boulch et al., 2008a; Finucane et al., 2016; Li et al., 2017; Nicholson et al., 2021; Teh et al., 2021), meta-analysis (Lunn et al., 2009, 2013; Kaizar, 2015) and propensity scores (Zigler et al., 2013; Zigler and Dominici, 2014). Jacob et al. (2017) give an overview of modularized Bayesian analysis including Cut-models from the perspective of statistical decision theory, and Pompe and Jacob (2021) give asymptotic properties. Nested MCMC (Plummer, 2015) is commonly used for fitting Cut models. New developments include variational approximation (Yu et al., 2021) and a computationally efficient variant of nested MCMC (Liu and Goudie, 2020). In Cut-model inference, feedback from the suspect module is completely cut. However, there may be a bias-variance trade-off: if the parameters of a well-specified module are poorly informed by "local" information then limited information from misspecified modules may allow us to bring the uncertainty down without introducing significant bias. Semi-Modular Inference (η-SMI, Carmona and Nicholls (2020)) generalises Cut-model inference as it offers a means by which we can control the influence of suspect modules on the fit for a good module. Candidate posterior distributions are indexed by an associated influence parameter η. At η = 1, η-SMI is standard Bayesian inference and at η = 0, η-SMI reproduces the Cut-model. Carmona and Nicholls (2020) suggest choosing η maximizing the expected log pointwise predictive density (ELPD), though this choice is not an essential part of their method and other criteria (Wu and Martin, 2020) may be more appropriate in different settings. Liu and Goudie (2021) adapt η-SMI for Geographically Weighted Regression using an influence parameter across likelihood factors which is modeled as a function of the distance between the spatial observation locations.
The δ-SMI sequence progressively "blurs" the data away using a procedure resembling Approximate Bayesian Computation (ABC) but otherwise offers candidate posterior distributions which are often similar to those of η-SMI. Liu and Goudie (2021) consider a parallel framework for deriving belief updates due to Zhang (2006). This resembles PAC-Bayesian approaches (McAllester, 1998;Shawe-Taylor and Williamson, 1997) and is distinct from the approach of Bissiri et al. (2016). This paper has three main parts. In the first part (Section 2) we show how prequential additivity leads to valid updates of belief and show that the Cut model is a valid belief update. In the second part (Sections 3 and 4) we introduce some SMI-variants and consider their properties. We use the theory from the first part to show they are valid belief updates. In the third part (Section 5) we give some simple examples to explore the behavior of one of the SMI-variants introduced in the second part. Consider the two-module configuration of Fig. 1. Let Z = (Z 1 , ..., Z m ) and Y = (Y 1 , ..., Y n ) be two vectors of data with model parameter vectors ϕ and θ. In our notation below we take the sample spaces to be Z j ∈ R dz , j = 1, ..., m, Y i ∈ R dy , i = 1, ..., n, ϕ ∈ R pϕ and θ ∈ R p θ though this is not an essential restriction and our final example takes discrete data. In Generalised Bayesian inference with a Gibbs posterior (Chernozhukov and Hong, 2003;Zhang, 2006;Jiang and Tanner, 2008;Bissiri et al., 2016) we have a prior π 0 (θ, ϕ) (a density here) and a loss l(ϕ, θ; Y, Z) connecting the data and parameters, measuring how well the parameters agree with the data. A belief update ψ is a rule which updates the prior distribution, taking into account the data through the loss. It determines an updated belief distributionp. When we are choosing between different belief updates we refer to these as "candidate posteriors". For the model in Fig. 1, we writẽ (2.1) When we specify a probability distribution from a loss, via a belief update, we write it as if it is a conditional probability density. We use p() (for densities over data) and π() (for densities over parameters) when they may be understood as a conditional probability. However, many belief updates, like the Cut model below, do not yield conditional probability distributions. We writep() for probability distributions of this kind. In Generalised Bayes the belief update from the prior to the posterior is For example, in Bayesian inference with observation models p(Z|ϕ) and p(Y |ϕ, θ) (probability densities, say) the posterior distribution of (ϕ, θ) is and so the loss in (2.2) "must have been" the negative log-likelihood, In this paper we follow Bissiri et al. (2016) and ask why (2.2) is a valid belief update of π 0 in the context of Cut-model inference (Plummer, 2015) and in related forms of Semi-Modular Inference (SMI, Carmona and Nicholls (2020)). We identify a feature of this setup which does not seem to have been considered explicitly to date: the loss itself may depend on our state of knowledge of the relation between parameters, for example, on π 0 (θ|ϕ). This is present in the Gibbs posterior for the Cut model. The Cut "posterior" for (ϕ, θ) is (Plummer, 2015) (2.5) The Cut is indicated in Figure 1 by the vertical dashed line. It is called a "Cut model" because the flow of information from the Y -module into the Z-module has been cut. The flow of information between modules of a Cut model is asymmetrical. 
This makes sense when the generative model for the data Y is misspecified, but the generative model for the Z-module is correct. The idea is to infer or "impute" ϕ using the reliable Z-model and stop misspecification in the Y-model from biasing that analysis. The Cut model above can be written in terms of the Bayes posterior as p̃^(c)(ϕ, θ|Y, Z) ∝ π(ϕ, θ|Y, Z)/p(Y|ϕ), (2.6) where p(Y|ϕ) = ∫ p(Y|ϕ, θ) π₀(θ|ϕ) dθ. (2.7) In our setting the likelihoods p(Y|ϕ, θ) and p(Z|ϕ) can be easily evaluated, but p(Y|ϕ) cannot. If the Cut model is a belief update with a Gibbs posterior, then the loss in (2.2) yielding (2.5) "must have been" l^(c)(ϕ, θ; Y, Z, π₀) = −log p(Z|ϕ) − log p(Y|ϕ, θ) + log p(Y|ϕ). (2.8) The loss function for the Cut model depends on the prior π₀(θ|ϕ) through the term log p(Y|ϕ). In the setting of Bissiri and Walker (2012) this prior dependence could be thought of as another "piece of information" alongside Y, Z. They write the loss l(ξ; I) where ξ = (ϕ, θ) is the parameter and I = (Y, Z) is the data or "information" informing the parameters. We recover that setup, at least formally, if we write I = (Y, Z, π₀). However, Bissiri et al. (2016) determine valid belief updates for additive losses only (see below), and we will see that the Cut-model loss is not additive, so we can ask: what is the valid belief update for the Cut-model loss? Does it coincide with the Cut-model posterior? The Bayes loss l^(b) in (2.4) is additive for iid data. In contrast, the Cut loss in (2.8) is not additive over k as it depends on the marginal p(Y|ϕ) in (2.7). However, the Cut loss depends on the evolving state of knowledge and this needs to be accounted for in the accumulated loss. The "prequential score" for prediction (Dawid and Musio (2015), Section 4) has a similar dependence on an evolving predictive distribution and so we call this property prequential additivity. Definition 2. (Prequential Additivity) Let a belief update ψ^(q) be given. For k = 1, ..., K let q̃_k(ϕ, θ) = ψ^(q){l(ϕ, θ; Y^(1:k), Z^(1:k), π₀), π₀} (2.9) be the belief distribution for ϕ and θ after the arrival of the first k sets of data Y^(1:k), Z^(1:k). Let q̃₀ = π₀. The loss l is prequentially additive with respect to the belief update ψ^(q) if the total accumulated loss over a sequence of measurements (Y^(k), Z^(k)), k = 1, ..., K is equal to the loss from a single bulk measurement, Σ_{k=1}^K l(ϕ, θ; Y^(k), Z^(k), q̃_{k−1}) = l(ϕ, θ; Y^(1:K), Z^(1:K), π₀). This is a condition on a predefined loss, not the definition of the total loss as is the case for the prequential score, so it will only hold if there is a relation between the loss l and the belief update ψ^(q). A loss which does not depend on the prior and is additive is clearly prequentially additive. However, for example, the Cut-model loss is prequentially additive but not additive. Proposition 2.0.1. The Cut-model loss l^(c)(ϕ, θ; Y, Z, π₀) in (2.8) is prequentially additive with respect to the belief update which is just the Cut-model posterior. Proof. See Appendix A1.1. The proof is closely related to the proof of order-coherence of Cut-model inference given in Carmona and Nicholls (2020). Order-coherence and valid belief updates In this section we show that the conclusions of Bissiri et al. (2016) extend to cover prequentially additive losses. We need this extension because prequential additivity is a weaker condition than the assumed additivity. We begin by defining order-coherence. Consider a general partition of the data (Y, Z) into K = 2 arbitrary subsets, as in the previous section, with Y^(1:2) = (Y^(1), Y^(2)) and Z^(1:2) = (Z^(1), Z^(2)). A belief update ψ is order-coherent in the sense of Bissiri et al.
(2016) if the posterior for independent data is the same regardless of whether we update belief from the prior, taking all the data in one tranche, or update with Y^(1), Z^(1) and use the resulting posterior as the prior for a belief update with Y^(2), Z^(2). In our setting with prior-dependence in the loss function we have the following definition. Definition 3. (Order-coherence) Let a belief update ψ^(q) be given and let q̃₁ = ψ^(q){l(ϕ, θ; Y^(1), Z^(1), π₀), π₀}. The belief update ψ^(q) is order-coherent if ψ^(q){l(ϕ, θ; Y^(1:2), Z^(1:2), π₀), π₀} = ψ^(q){l(ϕ, θ; Y^(2), Z^(2), q̃₁), q̃₁} for every n, m > 0 and every partition of the data taken in any order. The property is defined for K = 2 as it will hold for sequential belief updates along partitions of the data into K > 2 sets if it holds for K = 2. Order-coherence seems to us an axiomatic property for a belief update. Bissiri et al. (2016) show that the optimal, valid and order-coherent belief update ψ is the probability measure ν(dθ dϕ) minimising the loss L(ν; Y, Z, π₀) = ∫ l(ϕ, θ; Y, Z, π₀) ν(dθ dϕ) + KL(ν||π₀) (2.13) over ν ∈ F, where F is the family of measures, absolutely continuous with respect to the measure of π₀, for which E_ν(l(ϕ, θ; Y, Z, π₀)) is finite; that is, ψ{l(ϕ, θ; Y, Z, π₀), π₀} = arg min_ν L(ν; Y, Z, π₀). (2.14) They first show that a valid belief update should minimise an overall loss L of the form L = E_ν(l) + D(ν, π₀), where the second term is a measure D(ν, π₀) of divergence between the prior and ν. For our purposes this actually defines what we mean by a "valid" belief update. Bissiri and Walker (2012) and Bissiri et al. (2016) show that if the loss l is additive, if the belief update ψ determined by (2.14) is required to be order-coherent whatever the prior, parameter space, loss and data it is updating, and if L has a unique minimum and D = D_g is a g-divergence (see Appendix A1.2), then D_g must be the KL-divergence, and so a valid coherent belief update must minimise (2.13). An optimal belief update minimising (2.13) exists when E_{π₀}(exp(−l(ϕ, θ; Y, Z, π₀))) exists, and if this holds then the optimal valid and coherent belief update is the proper Gibbs posterior in (2.2). The result of Bissiri et al. (2016) justifies the belief update in (2.2) for an additive loss l(ϕ, θ; Y, Z). Theorem 2.1 below extends this to prequentially additive losses. Theorem 2.1. If a loss l is prequentially additive with respect to the belief update ψ^(q) given by the Gibbs posterior, then ψ^(q) is order-coherent. It further holds that L(ν; Y, Z, π₀) in (2.13) is a valid loss yielding an order-coherent belief update and ψ^(q) itself is the optimal valid order-coherent belief update ψ in (2.14). Having a prior-dependent loss gives the discussion of valid belief updates a circular feeling. Prequential additivity replaces additivity to determine (with coherence) a unique valid belief update. However, prequential additivity depends for its definition on some predefined rule ψ^(q) for updating belief from π₀ to q̃₁ and so on. The question remaining is whether prequential additivity and the coherence requirement are enough to impose a unique valid belief update, and whether that valid belief update coincides with the belief update ψ^(q) which ensured the loss was prequentially additive. We consider the Cut model as a first example of how this may be used to show the validity of a given belief update. Corollary 2.1.1. The Cut-model belief update defined in (2.5) is the optimal, valid and coherent belief update for the loss in (2.8). Proof. It is sufficient by Theorem 2.1 that the loss (2.8) is prequentially additive with respect to the belief update (2.5). This follows from Proposition 2.0.1.
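As a small numerical illustration of (2.13)-(2.14) (ours, not the paper's), on a discrete parameter space the Gibbs posterior ν* ∝ π₀ exp(−l) can be compared directly against random alternative distributions; it attains the minimum objective value, −log E_{π₀}[exp(−l)].

```python
import numpy as np

rng = np.random.default_rng(2)

# A five-point parameter space: prior pi0 and an arbitrary loss l.
pi0 = rng.dirichlet(np.ones(5))
loss = rng.normal(0.0, 1.0, 5)

# Gibbs posterior: nu* proportional to pi0 * exp(-loss).
gibbs = pi0 * np.exp(-loss)
gibbs /= gibbs.sum()

def L(nu):
    """Objective (2.13): expected loss plus KL(nu || pi0)."""
    return nu @ loss + np.sum(nu * np.log(nu / pi0))

# The Gibbs posterior beats random candidates, and L(gibbs) equals
# -log E_pi0[exp(-loss)], as the theory predicts.
candidates = [L(rng.dirichlet(np.ones(5))) for _ in range(2000)]
print(L(gibbs), min(candidates), -np.log(pi0 @ np.exp(-loss)))
```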
Semi-Modular Inference Having established the Gibbs posterior as the valid belief update for the Cut-model loss, we now point to some other related belief updates for prequentially additive losses. These are variants of η-SMI, a family of belief updates introduced in Carmona and Nicholls (2020). We define three families of candidate posterior distributions interpolating between the full-Bayes posterior (2.3) and the Cut-model posterior in (2.5). The idea here, following Carmona and Nicholls (2020), is to provide modulated input to the ϕ inference from the (ϕ, θ, Y)-module. In the next section we motivate this step in a bit more detail. Plummer (2015) explains that the Cut-model approach to inference using (2.5) is Bayesian Multiple Imputation (BMI), in essence a two-stage process: at the imputation stage the posterior distribution π(ϕ|Z) of ϕ is imputed from the data Z as if ϕ were missing data; in the analysis stage the posterior distribution π(θ|Y, ϕ) of θ is conditioned on the imputed ϕ so that uncertainty in ϕ is fed through into the distribution of θ. The Cut model and Bayesian Multiple Imputation Bayesian inference (2.3) can also be given formally as a two-stage imputation/analysis procedure, π(ϕ, θ|Y, Z) = π(ϕ|Y, Z) π(θ|Y, ϕ), with π(ϕ|Y, Z) ∝ p(Z|ϕ) p(Y|ϕ) π₀(ϕ) in the imputation stage, with p(Y|ϕ) the marginal likelihood in (2.7). If we did carry out Bayesian inference in this way, we would use the same model, p(Y|ϕ, θ), in both π(ϕ|Y, Z) (imputation) and π(θ|Y, ϕ) (analysis). This is an imputation scheme Meng (1994) calls "congenial", where it is appropriate for the imputation and analysis to be carried out using the same model. In Cut-model inference the imputation and analysis use different models for ϕ, as p(Y|ϕ, θ) is not used in the imputation. This may help in what Meng (1994) calls "uncongenial" problems. One negative feature of the Cut model is that it may remove too much information from the imputation for ϕ. This will often increase the posterior variance of ϕ and θ. In the context of hypothesis tests based on classical multiple imputation of missing data, Knuiman et al. (1998) refer to this as "dilution" of the effect due to "imputation noise". We may be happy to accept some dilution if the bias due to misspecification is substantial. However, if the (ϕ, θ, Y) module is only weakly misspecified, we may see a large increase in variance for just a small bias. Semi-Modular inference and Tempered SMI The γ-SMI family of candidate posteriors simply tempers from the Cut (at γ = 0) to full-Bayes (at γ = 1) via p̃_γ(ϕ, θ|Y, Z) ∝ π(ϕ|Z) π(θ|Y, ϕ) p(Y|ϕ)^γ ∝ π(ϕ, θ|Y, Z) p(Y|ϕ)^{γ−1}, (3.2) using (2.6) for the last line. The loss function for which it is a Gibbs posterior is l^(γ)(ϕ, θ; Y, Z, π₀) = −log p(Z|ϕ) − log p(Y|ϕ, θ) + (1 − γ) log p(Y|ϕ). (3.4) The p(Y|ϕ) term is the loss-function weighting that down-weights the influence of Y on ϕ. We show in Section A1.3 that this loss is prequentially additive with respect to the belief update in (3.2). It follows from Theorem 2.1 that p̃_γ in (3.2) is the optimal, valid and coherent belief update for the loss in (3.4). The γ-SMI posterior in (3.2) is attractive as a formally straightforward family of candidate posteriors encompassing Cut models and Bayesian inference. However, it is very awkward computationally and in fact we have no idea how to implement it in practice. We now give two alternative interpolating sequences of candidate posterior distributions. The first is η-SMI, given in Carmona and Nicholls (2020). We begin by introducing an auxiliary parameter θ̃, expanding the model parameters from (ϕ, θ) to (ϕ, θ̃, θ).
The η-SMI posterior is p̃^(s)_η(ϕ, θ̃, θ|Y, Z) = p̃_η(ϕ, θ̃|Y, Z) π(θ|Y, ϕ), where p̃_η(ϕ, θ̃|Y, Z) ∝ p(Z|ϕ) p(Y|ϕ, θ̃)^η π₀(ϕ, θ̃). (3.6) Several authors (for example, Miller and Dunson (2018)) observe that p(Y|ϕ, θ̃)^η is not a normalised probability density in Y. The power posterior is not simply a posterior distribution with an extra parameter η. We are interested in the marginal belief update for θ and ϕ, which is p̃^(s)_η(ϕ, θ|Y, Z) ∝ p(Z|ϕ) π₀(ϕ) E_{π₀(θ̃|ϕ)}[p(Y|ϕ, θ̃)^η] π(θ|Y, ϕ). (3.7) The tempering or γ-SMI posterior in (3.2) can be written in a similar way, p̃_γ(ϕ, θ|Y, Z) ∝ p(Z|ϕ) π₀(ϕ) (E_{π₀(θ̃|ϕ)}[p(Y|ϕ, θ̃)])^γ π(θ|Y, ϕ), so the order of raising to the power and marginalising is swapped (compare with (3.7) and with Equations (2.5) and (3.1)). The loss function for which the η-SMI family of belief updates are Gibbs posteriors is l^(s)(ϕ, θ̃, θ; Y, Z, π₀) = −log p(Z|ϕ) − η log p(Y|ϕ, θ̃) − log p(Y|ϕ, θ) + log p(Y|ϕ). We show in Section A1.3 that this loss is prequentially additive with respect to its Gibbs posterior, so that belief update is again the optimal, valid and coherent belief update. Kernel-Smoothing δ-SMI The third interpolating sequence of candidate distributions we describe is constructed by taking a different relaxation of the likelihood. For y, ỹ ∈ R let K_δ(y, ỹ) be a normalised kernel. We focus on the cases K_δ(y, ỹ) = N(y − ỹ; 0, δ²) and K_δ = (2δ)^{−1} I_{|y−ỹ|<δ}. For y, ỹ ∈ R^n we define the product kernel K_δ(y, ỹ) = ∏_{i=1}^n K_δ(y_i, ỹ_i) (3.10) and the kernel-smoothed likelihood p_δ(Y|ϕ, θ̃) = ∫ K_δ(Y, ỹ) p(ỹ|ϕ, θ̃) dỹ. (3.11) We define the δ-SMI posterior as p̃^(k)_δ(ϕ, θ̃, θ|Y, Z) = π^(k)_δ(ϕ, θ̃|Y, Z) π(θ|Y, ϕ), where π^(k)_δ(ϕ, θ̃|Y, Z) ∝ p(Z|ϕ) p_δ(Y|ϕ, θ̃) π₀(ϕ, θ̃). (3.14) We show in Section 4.1 that the δ-SMI family of belief updates defined in (3.14) are valid for the loss l^(k)(ϕ, θ̃, θ; Y, Z, π₀) = −log p(Z|ϕ) − log p_δ(Y|ϕ, θ̃) − log p(Y|ϕ, θ) + log p(Y|ϕ). Interpretation of δ-SMI as a generalised Cut model The imputation distribution π^(k)_δ(ϕ, θ̃|Y, Z) is a conditional probability (and so we write π^(k)_δ here). However, δ-SMI as a whole is not simply Bayesian inference with some simple model elaboration. The joint δ-SMI posterior is in fact a cut model for an enlarged model with three modules. The three data sets are Y, Z and Y′ = Y, the new copy of Y present in the imputation stage for ϕ. The generative models for the three modules are (ϕ, θ̃, Y′) ∼ π(ϕ, θ̃) p_δ(Y′|ϕ, θ̃), (ϕ, Z) ∼ π(ϕ) p(Z|ϕ) and (ϕ, θ, Y) ∼ π(ϕ, θ) p(Y|ϕ, θ); the feedback from the final (ϕ, θ, Y) module into the (ϕ, θ̃, Y′) and (ϕ, Z) modules has been cut. The posterior for the imputation stage is π^(k)_δ(ϕ, θ̃|Y′, Z) (with Y′ = Y) and the posterior for the analysis stage is π(θ|Y, ϕ). This Cut-model interpretation does not hold for η-SMI, as p̃^(s)_η(ϕ, θ̃|Y, Z) is not a posterior defined by Bayes rule, since p(Y|ϕ, θ̃)^η is not a normalised probability density. Comparison with η-SMI We can display the relation between the marginal δ-SMI posterior and the marginal η-SMI posterior. The marginal δ-SMI posterior can be written p̃^(k)_δ(ϕ, θ|Y, Z) ∝ p(Z|ϕ) π₀(ϕ) E_{π₀(θ̃|ϕ)}[p_δ(Y|ϕ, θ̃)] π(θ|Y, ϕ), (3.17) so the δ-SMI posterior looks like the η-SMI posterior in (3.7), with the prior expectation of the down-weighted likelihood p_δ(Y|ϕ, θ̃) for the former and of p(Y|ϕ, θ̃)^η for the latter. δ-SMI interpolation of Bayes and Cut Like η-SMI, the family of distributions indexed by δ interpolates between the Cut model and the Bayes posterior. The conditions on the kernel-smoothed likelihood in Proposition 3.0.1 restrict the choice of kernel K_δ in (3.10). They are easily satisfied. For example, if the kernel K_δ is the top-hat kernel and Y₁, ..., Y_n have a continuous density p(Y_i|θ, ϕ) then, under the integral in (3.11), we have K_δ(y, dy′) → δ_y(dy′) (the Dirac delta-function) as δ → 0 in the sense of a distribution, and p_δ(Y|θ, ϕ) → p(Y|θ, ϕ). If Y is discrete then, for the top-hat kernel and all sufficiently small δ, the only support point ỹ with K_δ(Y, ỹ) p(ỹ|θ, ϕ) > 0 is ỹ = Y, so p_δ(Y|θ, ϕ) ∝ p(Y|θ, ϕ) for all sufficiently small δ. Condition (3.18) also holds for the top-hat kernel. For example, for continuous real scalar data and i = 1, ..., n, p_δ(Y_i|ϕ, θ̃) = (1 − ε_δ(Y_i))/(2δ) for some ε_δ(Y_i) → 0 as δ → ∞, for any fixed data value Y_i, and so ratios tend to one. Carmona and Nicholls (2020) use the nested MCMC algorithm of Plummer (2015) to target the η-SMI posterior p̃^(s)_η(ϕ, θ|Y, Z).
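The nested MCMC scheme just mentioned is easy to sketch. Below is our own toy implementation for a Cut-type posterior in the flavour of the biased-data example of Section 5.1 (all numerical settings are assumptions for illustration, not the paper's R code). It exhibits the "double asymptotics" discussed next: an inner chain on θ is run to convergence for every outer draw of ϕ.

```python
import numpy as np

rng = np.random.default_rng(4)

def metropolis(logpost, x0, n_iter, step):
    """Scalar random-walk Metropolis; returns the sampled chain."""
    x, lp, out = x0, logpost(x0), np.empty(n_iter)
    for i in range(n_iter):
        xp = x + step * rng.normal()
        lpp = logpost(xp)
        if np.log(rng.uniform()) < lpp - lp:
            x, lp = xp, lpp
        out[i] = x
    return out

# Toy cut model: phi imputed from Z only; then, for each phi, an inner chain
# on theta targets pi(theta | Y, phi). Only the last inner state is kept, so
# each inner chain must run long enough to forget its starting point.
Z = rng.normal(0.0, 2.0, 25)
Y = rng.normal(1.0, 1.0, 50)
log_imp = lambda ph: -0.5 * np.sum((Z - ph) ** 2) / 4.0            # phi | Z
log_ana = lambda th, ph: (-0.5 * np.sum((Y - ph - th) ** 2)
                          - 0.5 * th**2 / 0.33**2)                 # theta | Y, phi

phis = metropolis(log_imp, 0.0, 2000, 0.5)
thetas = np.array([metropolis(lambda th: log_ana(th, ph), 0.0, 200, 0.3)[-1]
                   for ph in phis])
print(phis.mean(), thetas.mean())
```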
Here we show that similar methods can be set up to sample p̃^(k)_δ(ϕ, θ|Y, Z). Liu and Goudie (2020) give an efficient approximation scheme which speeds up the analysis within the same nested-MCMC framework. Targeting the δ-SMI posterior We may not be able to compute the δ-SMI likelihood p_δ(Y|ϕ, θ̃). However, we can treat the kernel K_δ as a probability density over "missing" data Ỹ, writing p_δ(Y, Ỹ|ϕ, θ̃) = K_δ(Y, Ỹ) p(Ỹ|ϕ, θ̃), so that the marginal obtained when we integrate over Ỹ is p_δ(Y|ϕ, θ̃) in (3.11). The extended posterior with auxiliary variables for the missing data is π^(k)_δ(ϕ, θ̃, Ỹ|Y, Z) ∝ p(Z|ϕ) K_δ(Y, Ỹ) p(Ỹ|ϕ, θ̃) π₀(ϕ, θ̃), which we can sample using standard MCMC. Marginally then, (ϕ, θ̃) ∼ π^(k)_δ(ϕ, θ̃|Y, Z). We take this simulated ϕ and sample θ|ϕ ∼ π(θ|Y, ϕ) using standard MCMC. This gives a pair (ϕ, θ) distributed according to p̃^(k)_δ(ϕ, θ|Y, Z). We do not use this Monte Carlo method below. In the main HPV-data example in Section 5.3 below the likelihood p_δ(Y|ϕ, θ̃) is given in terms of the CDF of a Poisson distribution and is readily evaluated. The downside of this approach is that it suffers from "double asymptotics". We run one MCMC chain generating samples from π^(k)_δ(ϕ|Y, Z). For each sample ϕ output in this run we simulate a chain targeting π(θ|Y, ϕ) and take the last sampled θ-value. This second chain must run to convergence. Whilst in our experience very high accuracy can be achieved in a modest runtime, of the order of ten times the runtime of a chain targeting the Bayes posterior π(ϕ, θ|Y, Z) for the same ESS (Carmona and Nicholls, 2020), this is clearly a weakness of the scheme. It may be preferable to analyse the δ-SMI posterior using the variational framework of Yu et al. (2021) and Carmona and Nicholls (2021). SMI and Bayesian Multiple Imputation Some of the forms of SMI listed above are variants of BMI in which we use information from the Y-module to inform the ϕ imputation. This is the case for η-SMI and δ-SMI. From a BMI perspective these SMI variants are simply trying to make the best possible imputation of ϕ using the available information. The parameters η and δ will be set to values that allow the right amount of information to flow back from the (ϕ, θ, Y)-module to influence the ϕ imputation. The choice of these values is discussed in Section 4.3. However, γ-SMI cannot be set up as BMI, at least in any computationally tractable way, as it cannot be written as a suitable product of conditional probabilities. Properties of SMI In this section we show that the different forms of SMI we have written down are all valid belief updates. We then give criteria and estimation procedures defining and computing an optimal δ. Carmona and Nicholls (2020) show that η-SMI is order-coherent. The proof that its loss is prequentially additive is based on similar reasoning. We now extend these results to γ-SMI and δ-SMI: the γ-SMI and δ-SMI posteriors are the optimal, valid and order-coherent belief updates for their respective losses. Proof. Since the losses are obtained from the corresponding Gibbs posteriors, it is sufficient by Theorem 2.1 that these losses are prequentially additive with respect to their associated belief updates. This follows from Proposition 4.0.1 below. Asymptotic behaviour of δ-SMI In Bayesian inference a family of densities P_Ω = {p(·|ϕ, θ) : (ϕ, θ) ∈ Ω} with parameter space Ω is specified for unknown parameters θ, ϕ, and belief about the true parameters (θ*, ϕ*) is updated by the observed data using Bayes' rule. If the model is well specified, p* ∈ P_Ω, then under regularity conditions the posterior concentrates at the true parameter values as the number of observations increases.
If the parametric model is misspecified, p* ∉ P_Ω, then under regularity conditions the posterior concentrates at the pseudo-true parameter values minimizing the Kullback-Leibler divergence between p* and p(·|ϕ, θ) (Berk, 1966). In these settings the Maximum Likelihood Estimator (MLE) is a natural estimator for the parameters minimising the Kullback-Leibler divergence (Akaike, 1973). The pseudo-truth is given by the limiting MLE taken on large data. The asymptotic behaviour of the Bayes posterior distribution for misspecified parametric models is considered in Kleijn and van der Vaart (2012). A covariance matrix guaranteeing the correct asymptotic Frequentist coverage of the pseudo-true parameters was given by Müller (2013). Pompe and Jacob (2021) give asymptotics for the Cut model. Because δ-SMI is a kind of Cut-model inference (recall, the observation model p_δ(Y|ϕ, θ̃) is normalised) that theory applies here. Denote by (ϕ*_δ, θ̃*_δ, θ*_δ) the pseudo-true values of ϕ, θ̃ and θ (4.1), and let ϕ̂_δ, θ̃̂_δ and θ̂_δ be the separate MLEs in the imputation and analysis modules (4.3). Pompe and Jacob (2021) show that, under regularity conditions, and taking limits in m and n with m/n = α, the cut-MLEs converge to the pseudo-true values and are asymptotically normal about them, with Σ_F a covariance defining the asymptotic Frequentist coverage of the pseudo-true values. Pompe and Jacob (2021) give this covariance in terms of the model elements. In contrast, if (ϕ, θ̃, θ) ∼ p̃ is distributed according to the Cut posterior, then the posterior is asymptotically normal about the MLEs for some covariance Σ_C. Pompe and Jacob (2021) give Σ_C in terms of the Cut-model elements. They show that Σ_C ≠ Σ_F in general, and so under the stated regularity conditions the Cut-model posterior concentrates on the pseudo-true values but does not have correct Frequentist coverage in the limit of large data. Since δ-SMI is a kind of generalised Cut model (strictly, a Cut at each δ) the same observations apply. Choosing the influence parameter Having shown how to construct valid candidate posterior distributions for the Cut model and SMI, we select a candidate for downstream inference using an "external" criterion. In this paper we select a candidate posterior by matching the posterior predictive distribution to the true generative distribution of the data. Wu and Martin (2021) take a similar criterion when they select the power in a power posterior. Following Carmona and Nicholls (2020), we consider the out-of-sample predictive accuracy of the model as our utility function for meta-parameter selection. Our criterion is the Expected Log Pointwise Predictive Density (ELPD), ELPD_{y,z}(Y, Z; δ) = ∫ p*(y, z) log p̃^(k)_{y,z,δ}(y, z) dy dz, (4.5) where p* is the distribution representing the true data-generating process and p̃^(k)_{y,z,δ}(y, z) = E_{p̃^(k)_δ(ϕ,θ|Y,Z)}[p(y|ϕ, θ) p(z|ϕ)] (4.6) is a candidate posterior predictive distribution, indexed by δ. We would like to set δ* = arg max_{δ≥0} ELPD_{y,z}(Y, Z; δ) and select the δ-SMI posterior p̃^(k)_{δ*} for analysis. In general the ELPD must be estimated, as p* is unknown. In Section 5.1 (a simple synthetic example) we calculate the ELPD exactly. In Section 5.2 we use LOOCV to estimate the ELPD (for the Z-data alone). In Section 5.3 we use the WAIC to estimate the ELPD for the Y and Z data separately, using the methods of Vehtari et al. (2017). There is some freedom in the choice of utility function depending on the inference objective. For example, in Section 5.2 we use the ELPD for the Z data alone as it prioritises ϕ-inference. One weakness of the ELPD is that we often value parameter estimation over predictive performance. It is not clear to us how to answer this issue in general. However, there are settings where one can take a utility which more directly targets parameter estimation. For example, if θ = (θ₁, ..., θ_p) are model parameters which enter as a priori exchangeable auxiliary variables naturally interpreted as missing data, and the data Y, Z come with actual observations of a subset (θ₁, ..., θ_d), 1 ≤ d < p, of the θ-values, then we may choose δ using LOOCV, treating the observed θ-values as the held-out data. See Carmona et al. (2022) for an example where this approach is used.
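Estimating the ELPD by Monte Carlo is straightforward when replicate data and posterior draws are available. The sketch below is a generic illustration (ours, with hypothetical inputs), not the LOOCV or WAIC estimators of Vehtari et al. (2017) used in Sections 5.2-5.3.

```python
import numpy as np
from scipy.special import logsumexp

def elpd_estimate(log_lik):
    """Monte-Carlo ELPD: log_lik[r, s] = log p(y_rep[r] | posterior draw s).
    Averages log((1/S) * sum_s p(y_rep | draw s)) over the replicates."""
    S = log_lik.shape[1]
    return (logsumexp(log_lik, axis=1) - np.log(S)).mean()

# Toy check with a normal model: posterior draws tightly around the truth,
# so the estimate approaches E[log N(y; 0, 1)] = -0.5*log(2*pi) - 0.5 ~ -1.42.
rng = np.random.default_rng(5)
y_rep = rng.normal(0.0, 1.0, 500)          # replicates standing in for p*
mu = rng.normal(0.0, 0.1, 2000)            # candidate-posterior draws of the mean
log_lik = -0.5 * (y_rep[:, None] - mu[None, :])**2 - 0.5 * np.log(2 * np.pi)
print(elpd_estimate(log_lik))
```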
Examples

Here we present three reproducible examples. R code (R Core Team, 2019) reproducing all results below is given in https://github.com/gknicholls/delta-SMI-repository.

Simulation study: biased data

This is a simple synthetic example taken from Liu et al. (2009) in which the source of the "misspecification" is a poorly chosen prior. Since there is no misspecification in the observation models, the interpolating models p̃^(k)_δ, δ ≥ 0 (including Cut and Bayes) concentrate, in the limit n → ∞ with m/n constant, on the true parameter values (ϕ*, θ*). The KL-divergence between p* and p̃^(k)_{y,z,δ} tends to zero and the ELPD converges to a constant, ∫ p* log(p*) dy dz, independent of δ. This model was given in Jacob et al. (2017) as an example where Cut-model approaches improve on Bayesian inference, and analysed in Carmona and Nicholls (2020) as an example of η-SMI. Here we repeat this analysis for our new SMI variants. In this normal setup the three interpolations η-SMI, δ-SMI and γ-SMI are all identical. We may take a fixed value of δ and recover the η-SMI and γ-SMI distributions by setting η = σ_y²/(σ_y² + δ²) and γ = σ_y²/(δ² + σ_y²) + 1.

We choose true parameter values in such a way that each dataset offers apparent advantages for estimating ϕ. One dataset is unbiased but has a small sample size, m = 25, whereas the second has an unknown bias but more samples, n = 50, and smaller variance. Suppose the true generative parameters are ϕ* = 0, θ* = 1, and we know σ_z = 2 and σ_y = 1. We assign a constant prior for ϕ, while θ is subjectively assessed to have a N(0, σ_θ²) prior. We are over-optimistic about the size of the bias and set σ_θ = 0.33. These choices are all the same as previous authors' except that those authors took σ_θ = 0.5. Our choice is a little more "extreme". We do this simply to get an example where effects are a bit more visible.

We calculate the δ-SMI posterior for a range of δ ∈ [0, ∞]. Picking up from the marginal (3.17) of interest, the imputation-stage marginal for ϕ is normal, and the posterior for θ given ϕ is normal as well. With these expressions we have p̃^(k)_δ(ϕ, θ | Y, Z) as a product of normal densities. This may be sampled by direct simulation at any desired value of δ. We get the Bayes and Cut posteriors by taking the respective limits δ → 0 and δ → ∞.

A scatter plot of the p̃^(k)_δ posterior at three values δ = 0, δ*, ∞ is given in Fig. 2. The δ-SMI posterior covers the truth. For ease of visualisation the random number seed was chosen (six attempts) so that the Cut and Bayes posteriors were reasonably well separated, but in other respects this is typical. The δ-SMI posterior does relatively well at recovering the true parameters, though it is chosen by targeting the ELPD. This is not expected, or even desirable, in a misspecified model. However, in this example the observation models are both exactly correct, and the misspecification is in the prior. For further visualisation we plot in Fig. 3 the marginal δ-SMI posteriors for ϕ (top) and θ (bottom) at δ = 0 (Bayes) and δ = ∞ (Cut) together with the selected δ-SMI at δ*, the choice maximising the ELPD.
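For concreteness, the two-stage sampler behind these figures can be sketched as follows (ours). We assume a Gaussian smoothing kernel, so the smoothed Y-likelihood is N(ϕ + θ̃, σ_y² + δ²); with the flat prior on ϕ, the imputation-stage ϕ-marginal and the analysis-stage θ | ϕ posterior are then available in closed form. The algebra is ours, consistent with, but not copied from, the paper's expressions.

import numpy as np

rng = np.random.default_rng(1)
phi_true, theta_true = 0.0, 1.0
sig_y, sig_z, sig_theta = 1.0, 2.0, 0.33
n, m = 50, 25
Y = rng.normal(phi_true + theta_true, sig_y, n)   # biased module (bias theta)
Z = rng.normal(phi_true, sig_z, m)                # unbiased module

def delta_smi_sample(delta, size=5000):
    s2 = sig_y**2 + delta**2                   # smoothed Y-variance
    A = s2 / n + sig_theta**2                  # Var(Ybar | phi), theta~ integrated out
    B = sig_z**2 / m                           # Var(Zbar | phi)
    v = 1.0 / (1.0 / A + 1.0 / B)              # imputation-stage phi-marginal variance
    mu = v * (Y.mean() / A + Z.mean() / B)
    phi = rng.normal(mu, np.sqrt(v), size)
    # Analysis stage: theta | phi, Y under the unsmoothed Y-model.
    prec = n / sig_y**2 + 1.0 / sig_theta**2
    theta = rng.normal(n * (Y.mean() - phi) / sig_y**2 / prec, np.sqrt(1.0 / prec))
    return phi, theta

for delta, label in [(0.0, "Bayes"), (3.5, "delta*"), (1e6, "~Cut")]:
    phi, theta = delta_smi_sample(delta)
    print(label, round(phi.mean(), 3), round(theta.mean(), 3))

At δ = 0 the two stages reproduce the exact Bayes posterior; as δ → ∞ the Y-data contribute nothing to ϕ and the first stage reduces to the Cut model.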
In this example, where only the θ-prior is misspecified, Bayes has little overlap with the truth. Cut has reasonable overlap but larger variance, as the Y data do not inform ϕ. The δ-SMI posterior selected using the ELPD has lower variance than the Cut and better location. The data are synthetic, so we estimate the Posterior Mean Squared Error (PMSE), the expectation under p̃^(k)_δ of the squared distance to the truth, measuring the dispersion of the selected δ-SMI posterior around the truth, and calculate the posterior predictive distribution for new data and the exact ELPD in Appendix A2.1 using (4.6) and (4.5). This simple example would be quite challenging for Monte-Carlo estimation of ELPD_{y,z}(Y, Z; δ). Referring to Fig. 3, the variation in the ELPD (bottom right panel) with δ is small, so its maximum is hard to locate accurately.

In the right column of Fig. 3 we display these metrics for δ ∈ [0, ∞]. The values of PMSE and ELPD_{y,z}(Y, Z; δ) for Bayes and Cut correspond respectively to the values taken by the functions plotted at the left and right edges of the graphs. We see their PMSEs are larger (as we would expect from the marginal posterior densities) and their ELPD-values are lower than those of the δ-SMI posterior. The estimated value δ* ≈ 3.5 in δ-SMI corresponds to η* ≈ 0.08 in η-SMI. The scale of the "noise" added to the Y-values looks relatively large compared to their variance σ_y² = 1. This tells us that the Y-module is misspecified. However, referring to (5.1), we see δ*² ≈ 12 is large relative to σ_y² + nσ_θ² = 6.4, so δ-SMI is actually removing information from the θ-prior from the ϕ-imputation.

Misspecified regression model

This simple synthetic example illustrates the behaviour of the method when the observation model in the Y-module is misspecified. The setup is otherwise similar to the biased-data example. We have a well specified Z-module with a small data set. Interest focuses on estimation of ϕ. We have a second, larger data set (the Y-module). Standard Bayesian analysis has given us reason to believe the Y-module is misspecified, so we cannot estimate θ. However, we will use some information from the Y-module in order to reduce the variance of our ϕ-estimation. We use δ-SMI to control the bias coming from the misspecified Y-module.

The model is a regression. Covariates X_i ∼ F_X, i = 1, ..., n, and their sampling distribution F_X are known exactly. The fitted models are Y_i ∼ N(ϕ + θX_i, σ_y²), i = 1, ..., n, and Z_j ∼ N(ϕ, σ_z²), j = 1, ..., m. (5.2) The true parameter values are ϕ*, θ*. The true observation model for Z is the same as the fitted model; the true observation model for Y is Y_i ∼ N(ϕ* + θ*X_i^k, σ_y²), with k > 0 a parameter we vary to illustrate different levels of misspecification. The ϕ and θ priors are both flat improper priors. Parameter settings are given in Appendix A2.2.

The resulting δ-SMI distributions (once integrated over θ̃) are normal. The MLEs (4.3) obtained by maximising the likelihoods on each side of the cut coincide with the posterior means (the analysis-stage MLE for θ equals the imputation-stage MLE for θ̃). These converge to the pseudo-true values defined in (4.1), expressible here in terms of the moments M_X^r = E(X^r), r = 1, 2, ..., where X ∼ F_X is the scalar covariate (an abuse of notation). Since the posterior means and MLEs coincide, the δ-SMI posterior in (5.4) converges, as n → ∞ with α = m/n fixed, to concentrate on the pseudo-true values. It is clear from the pseudo-true expressions that σ_y² + δ² balances α = m/n, so larger δ gives a smaller effective Y-sample size.
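The effective-sample-size point can be seen directly in a weighted least-squares sketch of the imputation stage (ours, again assuming a Gaussian smoothing kernel so the smoothed Y-model has variance σ_y² + δ²): as δ grows, the Y-rows are down-weighted and the estimate of ϕ moves towards the Z-only (Cut) estimate.

import numpy as np

rng = np.random.default_rng(2)
phi_t, theta_t, k = 1.0, 2.0, 2.0            # k = 2: linear fit to a quadratic truth
sig_y, sig_z = 1.0, 1.0
n, m = 200, 50
X = rng.uniform(0.0, 2.0, n)
Y = phi_t + theta_t * X**k + rng.normal(0.0, sig_y, n)   # true (misspecified-for-us) model
Z = rng.normal(phi_t, sig_z, m)

def imputation_estimate(delta):
    # Weighted LS for (phi, theta~): rows [1, X_i] with weight 1/(sig_y^2 + delta^2)
    # for the Y-data, and rows [1, 0] with weight 1/sig_z^2 for the Z-data.
    wy, wz = 1.0 / (sig_y**2 + delta**2), 1.0 / sig_z**2
    D = np.vstack([np.column_stack([np.ones(n), X]),
                   np.column_stack([np.ones(m), np.zeros(m)])])
    w = np.concatenate([np.full(n, wy), np.full(m, wz)])
    t = np.concatenate([Y, Z])
    return np.linalg.solve(D.T @ (D * w[:, None]), D.T @ (w * t))

for delta in [0.0, 1.0, 5.0, 50.0]:
    print(delta, np.round(imputation_estimate(delta), 3), "Zbar =", round(Z.mean(), 3))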
If δ² = c/α − σ_y² for fixed c > 0, then the posterior concentrates on the same pair of (ϕ, θ)-values as α varies. As δ → ∞ (the Cut model), ϕ_δ → ϕ* approaches the true value, as the Z-model is not misspecified, and θ_δ → θ* M_X^{k+1}/M_X^2. If k = 1 there is no model misspecification, and the pseudo-true values equal the true values regardless of the δ and α values. In this setting, if we are interested in estimating ϕ then we use the ELPD for z alone to define the optimal δ*-value, as it favours a posterior p̃^(k)_δ concentrated on ϕ*. It is given by (5.7). We can calculate the exact ELPD_z in this example. However, in order to show how well the method works in practice, we instead estimate ELPD_z in (5.7) using the LOOCV estimator (5.8). We set δ* = arg max_{δ ≥ 0} of the estimated ELPD_z(Y, Z; δ). We then estimate the posterior mean squared error for ϕ, PMSE_ϕ.

In Figure 4 (top) we show how the posterior mean squared error varies as we increase the level of misspecification by varying k from k = 1 (no misfit) up to k = 2 (linear fit to quadratic). Each box shows the scatter of 100 PMSE_ϕ-values estimated using 100 independent replicate data sets and associated δ*-values. At large k ≈ 2 the Cut model (green) gives a lower PMSE than Bayes (red). When k ≈ 1 the Bayes posterior is more concentrated on the true parameter. The LOOCV-selected δ-SMI posterior p̃^(k)_{δ*} tracks the "best" of these two as k varies.

One question is whether allowing δ to take values other than 0 or ∞ is actually adding anything. Our ELPD-utility targets prediction, so of course δ-SMI does well on this criterion, whilst the PMSE gains are slight. Figure 4 (bottom) compares the exact ELPD_z of the selected δ-SMI posterior with Bayes and Cut and shows the clear benefit of δ-SMI. This amounts to a test of the quality of the LOOCV estimation of ELPD_z in (5.8). There may be some advantage in using δ* as a summative measure of misspecification. If it is very small, or very large, we use Bayes or Cut respectively. However, for intermediate values of k in Figure 4 we see that δ-SMI does slightly better than Bayes or Cut. For this range of k, the Bayes and Cut distributions are far apart, but the misspecification is not so bad that we gain by simply cutting feedback altogether. Carmona and Nicholls (2020) give an example for η-SMI in which more dramatic gains are seen from using intermediate values.

Epidemiological data

In our final example, we apply SMI to an epidemiological dataset introduced by Maucort-Boulch et al. (2008b), studying the correlation between human papilloma virus (HPV) prevalence and cervical cancer incidence, revisited by several authors including Plummer (2015) and Jacob et al. (2017) in the context of Cut models and Carmona and Nicholls (2020) for η-SMI. The model has two modules: in each population i = 1, ..., n, n = 13, a Poisson response for the number of cancer cases Y_i in T_i women-years of follow-up, and a Binomial model for the number Z_i of women infected with HPV in a sample of size N_i from the i'th population. For i = 1, ..., n,

Z_i ∼ Binomial(N_i, ϕ_i),   Y_i ∼ Poisson(μ_i),   μ_i = T_i exp(θ_1 + θ_2 ϕ_i).   (5.9)

There are reasons to expect the Poisson module to be misspecified (Plummer, 2015). The relaxation of the Poisson likelihood under δ-SMI is defined in terms of the Poisson CDF F(· | ϕ_i, θ̃) with mean μ_i: for i = 1, ..., n, the point probability in (5.9) is replaced by the probability that the count falls within δ of Y_i (a code sketch of this relaxed likelihood is given below). Notice that when δ < 1 this reduces to the ordinary Poisson probability, so the δ-SMI posterior coincides with Bayes for that range of δ-values, as observed below Proposition 3.0.1. Following Carmona and Nicholls (2020), we use the ELPD_y of the Poisson data (where ELPD_y is defined in a similar way to ELPD_z in (5.7)), as estimated by WAIC (Vehtari et al., 2017), to select the δ-SMI distribution p̃^(k)_{δ*} with posterior predictive distribution most closely matching the true generative model, and compare against η-SMI, with η* chosen in the same way.
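To make the relaxed likelihood concrete, here is a small sketch (ours). We read the relaxation as a symmetric top-hat of half-width δ around the observed count, evaluated as a Poisson-CDF difference; the paper's exact kernel and normalisation may differ, and for δ < 1 the expression reduces to the ordinary Poisson pmf, consistent with the remark above.

import numpy as np
from scipy.stats import poisson

def relaxed_poisson_loglik(y, mu, delta):
    # log p_delta(y | mu) as a Poisson-CDF difference over {y': |y' - y| <= delta}.
    upper = poisson.cdf(y + delta, mu)
    lower = poisson.cdf(y - delta - 1e-9, mu)   # tiny shift keeps the lower edge inside
    return np.log(np.maximum(upper - lower, 1e-300))

y = 162                                  # a typical cancer count (the median in the HPV data)
mus = np.array([140.0, 162.0, 190.0])
for delta in [0.5, 8.0]:
    print(delta, np.round(relaxed_poisson_loglik(y, mus, delta), 3))
# Larger delta flattens the likelihood in mu, weakening feedback from the Y-module.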
Nested MCMC targeting the δ-SMI posterior was implemented using STAN (Carpenter et al., 2017). Figure 5 shows the Bayes, δ-SMI, η-SMI and Cut posteriors for (θ_1, θ_2); the Bayes and Cut posteriors are well separated. The two candidate δ-SMI and η-SMI distributions (purple and dashed-purple contours) in this figure are not those at δ* and η* respectively. Instead, we choose a "central" δ-value and then choose a corresponding η with a comparable ELPD_y-value. This is done to show how similar the δ- and η-SMI posteriors for θ_1 and θ_2 are across the range of candidate posteriors, when we match them by their ELPD values. The δ- and η-SMI posteriors are of course identical at Bayes and Cut, and this shows how similar they are over the range.

Figure 6 presents the ELPD values of the candidate δ-SMI (crosses) and η-SMI distributions (red curve). For each δ there is an η giving the same ELPD. We find a monotone decreasing function transforming the η-values; the function is chosen so that the ELPD trend across η matches that across δ as closely as possible. In Figure 6 (left), the Bayes posterior gives the better posterior predictive performance for the Poisson data, the Y's (largest ELPD_y at small δ), so we choose Bayes when we choose δ to maximise ELPD_y in the graph on the left. In this case the Y-model is misspecified, so the well specified Binomial model "helps" for Y-prediction. In contrast, if we care about predicting the Binomial data Z, so that we select a δ-SMI posterior using ELPD_z, then we see from Figure 6 (right) that the Cut model is favoured: the Z-model is well specified, so information from the misspecified Y-model only worsens performance.

The δ meta-parameter in δ-SMI seems more readily interpretable than the η meta-parameter in η-SMI. Suppose we use the ELPD and select η* = 0.1. This seems rather far from Bayes at η = 1. However, based on the ELPD_y values of the Poisson module shown in Fig. 6, η* = 0.1 gives a similar ELPD_y to δ-SMI with δ = 8. Now typical values of the Poisson data Y are in the hundreds (the median is 162), so "coarsening" these data with a kernel of bandwidth δ = 8 should lead to a mild modification of the posterior.

Many kernels would satisfy Proposition 3.0.1. We investigated sensitivity to the choice of kernel, considering in particular kernels in which the "bandwidth" δ was larger at larger Y-values (we used the top-hat kernel centred at y with width √y δ). The results (which we do not report) were robust to this variation at least.

Discussion

In this paper we have extended the property of valid belief updates to prequentially additive losses. We gave some examples of prequentially additive losses arising in Cut models and three forms of SMI. These order-coherent inference schemes treat misspecification in models with multiple modules. One criticism of this program is that order-coherence is not axiomatic for misspecified models. However, it seems to us a desirable property if the fitted model imposes conditional independence. Another criticism, noted in Section 4.2, is that Cut models and δ-SMI do not have correct Frequentist coverage of the pseudo-true parameters in the limit of many observations (Pompe and Jacob, 2021). However, first, we expect SMI to be useful when one module is well specified and we wish to bring in information from other potentially misspecified modules.
In our running example, Figure 1, the Frequentist coverage of the asymptotic Cut-model posterior for ϕ under replication of the Z-data will be correct, as that module is by assumption well specified. Secondly, in our experiments in Section 5.2 on a small data set, the distribution of PMSE values obtained for SMI under replication of the data was not worse than Cut and Bayes, and often better. This behaviour is observed over a range of different levels of misspecification, using fitting methods that are available in realistic settings. Finally, the Cut-alternative suggested in Pompe and Jacob (2021), which does have correct asymptotic Frequentist coverage, is not an order-coherent belief update.

A broader criticism is that the parameters of a strongly misspecified model lose the physical meaning they get from the generative model. Whilst prediction of new data still makes sense, parameter estimation does not. Again, this criticism does not arise when our aim is to control the flow of information from a misspecified model into a well-specified model and estimate parameters in the well-specified model.

We used the ELPD as a utility to select a belief update. In general the utility should take into account the objectives of analysis. The ELPD targets predictive performance. When our interest is in parameter estimation and not prediction, we use the ELPD as a proxy for a utility targeting the parameters. We can choose the data on which the ELPD is computed so that the ELPD is sensitive to the parameters we care about. For example, if our aim is to infer ϕ in Figure 1 then ELPD_z in (5.7) is a natural choice.

The model components identified as "modules" may to some extent be chosen in the analysis. A module may contain more than one distinct data type, or none. Modules with no data incorporate prior information, and this information may need to be cut, or modulated in the same way as any other source of information entering the analysis. Styring et al. (2017) and Yu et al. (2021) give Cut-model analyses, and Carmona and Nicholls (2020) and Styring et al. (2022) give η-SMI analyses, of a hierarchical model for archaeological data in which one of the modules has a large vector of missing data, but no observed data.

We have seen that δ-SMI and η-SMI can give very similar posteriors, identical in the simple normal example in Section 5.1, and in general depending on the chosen δ-SMI smoothing kernel K_δ. We presented SMI as examples of Gibbs posteriors with loss functions which are only prequentially additive but give valid and order-coherent belief updates. The η-SMI family of posterior distributions is based on power posteriors. This is a natural choice, but not the only one available. Any order-coherent family of distributions interpolating Cut and Bayes is potentially of interest. One good feature of δ-SMI posteriors is that δ has the same dimension as the data Y, so the measure of misspecification has a simple interpretation: it is large or small compared to the variation in the sampled Y-values. Also, the δ-SMI posterior is a kind of ABC-posterior in which we condition on the data lying in some neighbourhood of the observed data. However, in δ-SMI this neighbourhood is a product space of neighbourhoods for each observation Y_i, i = 1, ..., n; there is no "summary statistic", and we recover Bayesian inference as δ → 0. This kind of connection between ABC-like methods and misspecification has been noted elsewhere (Miller and Dunson, 2018).
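Written out in LaTeX notation, the smoothed likelihood underlying this ABC-style reading is a product of per-observation kernel smoothings (a restatement, in display form, of the kernel construction used throughout):

p_\delta(Y \mid \phi, \theta) \;=\; \prod_{i=1}^{n} \int K_\delta(Y_i \mid \tilde y_i)\, p(\tilde y_i \mid \phi, \theta)\, d\tilde y_i, \qquad p_\delta \to p \ \text{as}\ \delta \to 0,

so the relaxation acts observation by observation, with no reduction to a summary statistic.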
One other feature of δ-SMI, distinct from η-SMI, is that the likelihood relaxation p_δ(Y | ϕ, θ) is itself a probability distribution, normalised over the data. The power-likelihood p(Y | ϕ, θ)^η in η-SMI is not normalised in this way. It follows that the imputation distribution π̃^(k)_δ(ϕ, θ̃ | Y, Z) is given by Bayes rule for the observation model p_δ(Y | ϕ, θ). However, the inference itself is not Bayesian, unless δ = 0, as p̃^(k)_δ(ϕ, θ̃, θ | Y, Z) is not given by Bayes rule.

A number of extensions and variations seem possible. Goudie et al. (2019) consider multi-modular models which are in conflict because shared parameters have different priors in different models. They use Markov melding to bring these together in a single model with pooled priors. The pooled priors represent a kind of consensus across modules. This could be combined with SMI if some individual generative models are misspecified. In dictatorial pooling the pooled prior is taken to be the prior in one "authoritative" module. This may lead to misspecification in modules sharing the parameter. This is a setting suitable for SMI, where we know which modules are misspecified and need to modulate their influence on inference in the authoritative module. The Cut and Bayes posteriors can be replaced by distributions derived from the Posterior Bootstrap (Pompe and Jacob, 2021) or Bagged posteriors (Huggins and Miller, 2021), and this suggests δ-SMI-like sequences of distributions interpolating these "Cut" and "Bayes" distributions by adding "noise" with bandwidth δ to Y. Since these bootstrapped posterior distributions have good asymptotic Frequentist coverage of the pseudo-true parameters, at least for a misspecified variance, it is to be hoped that the δ-SMI sequence would inherit these properties.

We now show that L(ν; Y, Z, π_0) in (2.13) is the only valid loss in the sense of Bissiri et al. (2016). Our proof shows that we can substitute prequential additivity for additivity in the theorem in the supplement to Bissiri et al. (2016) which establishes KL as the unique prior-to-posterior loss in (2.13), so the following is very similar. Let ξ = (ϕ, θ) and O = (Y, Z), so the belief update is from π_0(dξ) to ν(dξ) under the loss l(ξ; O, π_0). Denote by Ω the parameter space of ξ = (ϕ, θ). We assume the total loss must be the sum of the expected loss and a prior-to-posterior divergence D_g, that is, L(ν; O, π_0) = E_ν[l(ξ; O, π_0)] + D_g(ν, π_0). Bissiri et al. (2016) justify this form, which we take as given. They establish the valid belief update for the class of g-divergences, D_g(ν, π_0) = ∫ g(dν/dπ_0) π_0(dξ), with g a fixed differentiable and convex function from (0, ∞) to R satisfying g(1) = 0. Under these conditions they give a concise proof that D_g must be the KL divergence (in fact, g(x) = kx log x + (g′(1) − k)(x − 1) for some k > 0; the extra terms integrate to zero). The authors cite Bissiri and Walker (2012) for a proof under weaker conditions. They show that, over this class of g-divergences, the KL divergence is necessary and sufficient for the optimal belief update ψ to be order-coherent for every parameter space Ω and every loss such that the objects involved exist. First of all, it is clear that KL is sufficient for order-coherence, as the optimal valid belief update is then equal to the Gibbs posterior ψ^(q) (see below), and we have seen this is order-coherent under the conditions of Theorem 2.1.
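Incidentally, the parenthetical claim that the extra terms integrate to zero is immediate. In LaTeX notation: since ν and π_0 are both probability measures,

\int \Bigl(\tfrac{d\nu}{d\pi_0} - 1\Bigr)\, d\pi_0 \;=\; \nu(\Omega) - \pi_0(\Omega) \;=\; 0,

so with g(x) = kx\log x + (g'(1)-k)(x-1),

D_g(\nu, \pi_0) \;=\; \int g\!\Bigl(\tfrac{d\nu}{d\pi_0}\Bigr)\, d\pi_0 \;=\; k \int \log\!\Bigl(\tfrac{d\nu}{d\pi_0}\Bigr)\, d\nu \;=\; k\,\mathrm{KL}(\nu \,\|\, \pi_0).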
In order to show KL is necessary, it is enough to give an example where the KL divergence is the only g-divergence giving coherence, so Bissiri et al. (2016) take a parameter space with just two states, Ω = {ξ_1, ξ_2} say. Let O^(1) = (Y^(1), Z^(1)), O^(2) = (Y^(2), Z^(2)) and q̃_1(ξ) ∝ exp(−l(ξ; O^(1), π_0)) π_0(ξ) in Definitions 2 and 3, using the belief update ψ^(q) which makes l(ξ; O, q̃) prequentially additive. Now take I_1 = (O^(1), π_0), I_2 = (O^(2), q̃_1) and I = (O, π_0) in the proof on page 3 of the supplement to Bissiri et al. (2016). We go through this to make it clear that everything continues to fall into place, and that the presence of q̃_1 inside the information I_2 is just what we need to make things work. We should keep in mind below that π_0 and q̃_1 are fixed pieces of information inside I_1 and I_2 as p is varied.
2022-01-25T02:16:04.621Z
2022-01-24T00:00:00.000
{ "year": 2022, "sha1": "7b22d60aa5d251a0209e0f57b2f8fdc1211477c2", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "7b22d60aa5d251a0209e0f57b2f8fdc1211477c2", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
248931068
pes2o/s2orc
v3-fos-license
Can Health Improvements from a Community-Based Exercise and Lifestyle Program for Older Adults with Type 2 Diabetes Be Maintained? A Follow-up Study

Background: Older people consistently report a desire to remain at home. Beat It is a community-based exercise and lifestyle intervention that uses evidence-based strategies to assist older people with type 2 diabetes mellitus (T2DM) to improve physical and functional fitness, which are crucial to maintaining independence. This follow-up, real-world study assessed the efficacy of Beat It and whether older adults with T2DM were able to maintain improvements in physical activity, waist circumference and fitness one year post completion. Methods: We have previously reported the methods and the short-term outcomes of Beat It. This paper reports anthropometric measurements and physical fitness outcomes of Beat It at 12 months post program completion and compares them to validated standards of fitness required to retain physical independence. Results: Improvements that were observed post program were maintained at 12 months (n = 43). While the number of participants who met fitness standards increased post program, not all increases were maintained at 12 months. Conclusions: This study provides promising early evidence that an eight-week, twenty-hour community-based clinician-led exercise and lifestyle program can improve health outcomes in older adults with T2DM which were retained for at least a year after program completion.

Introduction

In the field of health research, "translation" is the process through which breakthroughs in science are used to improve human health [1]. Research conducted in 'real-world' settings is essential to improving population health outcomes [2], particularly when we consider that it can take up to 17 years for research findings to be translated into practice [3]. Implementation-focused healthcare research situated in real-world settings is fundamental to rapid translation.

Type 2 diabetes (T2DM) is one of the fastest growing health challenges this century, with the number of adults with diabetes more than tripling in 20 years [4]. The aging population is contributing to the diabetes epidemic, with older adults representing a rapidly growing group of people with T2DM [5]. Regular physical activity is fundamental to T2DM management and essential in its prevention. However, two-thirds of older Australians do not meet physical activity guidelines [6]. The importance of maintaining functional fitness to support everyday activity and maintain independence is well established in the literature [7][8][9][10]. In people with T2DM, poor physical fitness is associated with mortality from all causes and the risk of falls [11][12][13]. In contrast, good physical fitness is well known to extend years of active independent living, reduce disability and improve the quality of life for older people [14]. Interventions aimed at motivating and increasing physical activity levels in older adults with T2DM are needed, and our earlier results about Beat It provided promising initial outcomes for people over 60 [15]. In the general adult population, supervised group exercise sessions are an accepted and effective strategy for increasing physical activity in community settings [16,17].
In older adults with T2DM, small-scale community-based supervised group exercise programs have demonstrated effectiveness in improving physical fitness in the short term (immediately post-intervention) [15,[18][19][20][21][22], with few follow-up studies demonstrating that these improvements can be maintained for up to one year [23,24]. It is not clear whether such programs, delivered across urban and regional community settings, can deliver and maintain improvements in physical fitness and other health indicators over the long term.

Beat It is an eight-week community-based clinician-led group exercise and education program that supports adults self-managing diabetes. It was first established by Diabetes NSW and ACT and is currently funded by the National Diabetes Services Scheme (NDSS). The eight-week health and physical fitness outcomes from this program have been published elsewhere [15]. This follow-up study is aimed at assessing whether participants in this program were able to maintain improvements in physical activity levels, waist circumference and fitness (muscular strength and power, aerobic endurance, balance, and flexibility) one year post completion.

Materials and Methods

The eight-week Beat It program design, study recruitment and measures have been reported elsewhere [15]. In brief, Beat It consisted of group exercise sessions conducted twice each week under the supervision of an Accredited Exercise Physiologist (AEP). These sessions included a variety of moderate-intensity resistance, aerobic, flexibility and balance exercises. Each participant had a one-on-one initial consultation with the AEP, which included pre-exercise screening, baseline health and fitness measures and goal setting. Additionally, participants were provided with six group diabetes self-management education sessions over the eight-week period. To deliver the program, all AEPs completed a specialized facilitator training program, with this certification being refreshed every two years.

This study employed a pre-post evaluation design in which participants completed in-person physical assessment sessions at baseline, eight weeks, and 12 months post program. Gender, date of birth, and residential postcode were collected. Socioeconomic status was estimated as previously described [15] using the Index of Relative Socioeconomic Advantage and Disadvantage (IRSAD), which ranks postcodes by relative socioeconomic advantage and disadvantage [25]. IRSAD was dichotomised into the top and bottom 50% of deciles. In addition to sociodemographic variables, the study collected body mass index (BMI), waist circumference, lower body strength, aerobic capacity, flexibility, and balance. Participants with missing data for gender, age, postcode, waist circumference, or the physical assessment measures (30-second sit-to-stand test, six-minute walk test (6MWT), and chair sit-and-reach test) were excluded from analysis.

The number of days per week participants engaged in aerobic and resistance exercise was collected at each timepoint. Aerobic exercise was dichotomised into less than three versus three or more times per week, and resistance exercise into less than two versus two or more times per week. Participants were asked to rate their willingness to include planned exercise in the management of their diabetes, and their confidence to exercise, using a 5-point Likert scale.
At the 12-month timepoint, 5-point Likert scales were used to record whether participants consumed more vegetables, ate less unhealthy food, and included more incidental activity in their daily lives. These questions were constructed by Diabetes NSW and ACT specifically for the purpose of evaluating the initiative and, as such, are not validated tools (Supplementary Material). This study was approved by the Macquarie University Human Ethics Committee, protocol number 5201950887424. Data analysis was performed using SPSS version 27 (SPSS Inc, Chicago, IL, USA). Means and standard deviations (SD) were calculated for continuous variables. Frequencies and percentages were calculated for categorical variables, excluding participants with missing data for that variable.

Results

A total of 43 people were included in the study. They attended the program at eight different locations across urban and regional New South Wales (NSW), were aged 60 years and over, had a diagnosis of T2DM, and had participated in Beat It. Of this cohort, 31 (72.1%) were female; age ranged from 61 to 81 years with a mean of 69.0 ± 4.2 years; 52.6% were born overseas; and 94.3% spoke English at home. Over one-third (37.2%) of participants were from lower socio-economic regions, and 58.1% resided within a major city, with the remainder residing in inner regional towns.

Improvements in waist circumference, aerobic capacity, strength, flexibility, and balance were observed post program in both male and female participants, and these were maintained at 12 months (Table 1 and Figure 1). The number of participants who met the fitness standards considered appropriate for healthy independent living for older individuals increased post program; however, not all increases were maintained at 12 months (Table 2). At 12 months, an increase in the proportion of participants who performed aerobic exercise three or more times per week (45.9% vs. 54.5%) and resistance exercise two or more times per week (32.4% vs. 58.8%), compared to baseline, was reported. Improvements in the proportion of participants willing to include planned exercise as a part of their diabetes management (64.9% vs. 79.0%) and in their confidence to exercise (66.7% vs. 85.7%) were also observed at 12 months compared to baseline. These participants also agreed or strongly agreed that they currently ate more vegetables (85.7%), consumed less unhealthy foods and drinks (82.9%), and did more incidental activity (82.9%) than they did prior to participating in Beat It.
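As a schematic of the descriptive analysis just reported (our sketch; the distances and the fitness cut-off are hypothetical illustrations, not the study's data):

import numpy as np

# Hypothetical 6-minute-walk distances (metres) for the same participants.
baseline = np.array([420.0, 455.0, 390.0, 510.0])
month12 = np.array([465.0, 470.0, 430.0, 525.0])

print(f"baseline:  {baseline.mean():.1f} +/- {baseline.std(ddof=1):.1f} m")
print(f"12 months: {month12.mean():.1f} +/- {month12.std(ddof=1):.1f} m")

standard = 450.0   # hypothetical fitness standard for independent living
print(f"meeting the standard at 12 months: {(month12 >= standard).mean():.0%}")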
Discussion

This study found that the health benefits of an eight-week, twenty-hour community-based clinician-led exercise and lifestyle program were retained for at least a year after program completion. This is an important finding because previous research shows that benefits of short-term interventions are difficult to maintain long term [29], and there are few studies conducted in real-world settings [30]. This study population included a subset of participants in Beat It who completed pre and post measures. Whilst participants had the greatest benefit in the first eight weeks, most improvements were maintained, including waist circumference, aerobic capacity, strength, flexibility, and balance.

Typically, this population group experiences physical decline as they age, with greater health and personal care needs compounded by physical inactivity and the subsequent atrophy of muscle mass and increase in fat mass, leading to loss of mobility and functional independence [31]. Balance, strength, and gait training can improve the function and independence of people with T2DM [12]; however, developing and sustaining behavioural change with diet and physical activity is difficult to achieve [32]. Strategies that have proven effective tend to be customized to the needs and priorities of individuals [32]. This study shows that it is possible, with a relatively short, clinician-led program, to sustain benefits which will likely enable greater independence for longer. The authors attribute this to the tailored and individualized program for each participant, reflecting their complex needs, while also offering a social group setting. The expertise of the AEPs, who were able to motivate, encourage and provide feedback on performance, combined with the small group size, appears to have enabled a cost-effective approach to improving health outcomes for a highly vulnerable population. Given the rapidly increasing rates of T2DM in high-income countries and the subsequent stress on aging and health care resources, programs like Beat It can contribute to improving quality of life, sustaining independence, and reducing healthcare needs. The program has now been scaled to 170 communities in NSW alone, 34% of which are rural, with over 120 AEPs having delivered the program.

While current evidence and guidelines support a multidisciplinary approach to T2DM care [33], research reveals that AEPs as a workforce are underutilized in Australia's health system [34]. This underutilization is attributable in part to the Australian health-care system and the fact that general practitioners (GPs) are the gateway for patients to access subsidized care from allied health providers. One study examining the trends and characteristics of GP referrals to AEPs, drawing on a sample of over 680,000 patient encounters with over 7000 GPs, found that there were only 619 referrals to AEPs [35]. The low referral rates to AEPs are ascribable to clinician perceptions about non-medical treatments, concerns about responsibility of care, a view of perceived patient disinterest in lifestyle interventions, and a perceived lack of change in chronic conditions post-referral [36]. Diabetes NSW and ACT, who deliver the Beat It program, have been able to circumvent this issue, as individuals with T2DM proactively seek medical clearance from their GP to participate.
This promotes patient engagement with their GP and builds awareness of the key role that AEPs can play in improving health outcomes through the supervision of an exercise intervention. Further education of GPs and other primary health care referrers about the role of AEPs in the prevention and management of chronic disease is needed. This needs to be supported by broader interprofessional collaboration in the management of T2DM, given the chronic nature of the condition and the quality-of-life effects of proactive, multidisciplinary management [33].

The primary limitation of this study was the small sample size, with 43 Beat It participants completing all evaluation measures. This reflects the real-world nature of the study and the fact that people consented to research as an adjunct to a program to improve management of T2DM. Information regarding co-morbidities, insulin dependence and length of time since a participant's diabetes diagnosis was not available. Embedding data collection for Beat It within existing reporting systems will likely increase the quality and size of data in the future. Despite the small sample, the results are reliable because they apply across gender and a range of sociodemographic criteria.

From a health-economic perspective, there is evidence of lower health care utilization and costs in T2DM individuals who meet minimum physical activity guidelines [17,37]. Diabetes NSW and ACT engaged an independent economic analysis of the program, which found that Beat It generated substantial value for participants, with every $1 spent generating a social value return on investment of between 3.5 and 6.5 dollars. According to this analysis, on a per-participant basis, the program creates approximately $800 of value for individuals and $1800 for the healthcare system [38]. The economic analysis used a social return on investment (SROI) [39] method which assigned value to benefits such as reduced GP visits, reduced hospital presentations and admissions, reduced cardio-metabolic risk, and consequent reductions in long-term complications from T2DM. These data were compared with the total cost of delivering the program. Further research on the health economics of Beat It is warranted to assess the costs and benefits over time. With the COVID-19 pandemic, Beat It was adapted to an online format. Further research will be needed to assess whether the benefits of Beat It are transferrable online and to determine the impact on SROI. Further research about the specific aspects of Beat It that have the greatest effect would also be valuable.

Conclusions

This study offers important findings for practitioners and policy makers seeking to maximise and sustain behaviour change in older people with T2DM, maximising quality of life and maintaining independence. T2DM is endemic in high-income countries [4], and Beat It offers an affordable and scalable solution with sustained benefits to individuals and the wider community. Further evaluation of Beat It when adapted for Indigenous and other culturally and linguistically diverse communities, in addition to it being delivered online, will provide greater insights into the efficacy of this promising program.

Funding: Beat It program delivery costs were funded by the National Diabetes Services Scheme. The authors received no funding to carry out the current study. The National Diabetes Services Scheme had no role in the outcomes reported in this study. Diabetes NSW and ACT co-funded the publication costs for the manuscript.
Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Macquarie University Human Ethics Committee (protocol number 5201950887424, 27/02/2019).

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The data that support the findings of this study are available on request from the corresponding author, Morwenna Kirwan.
2022-05-21T15:21:40.286Z
2022-05-19T00:00:00.000
{ "year": 2022, "sha1": "b9ec8ccb925a79793821ed00c437e3298721971b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2673-4540/3/2/25/pdf?version=1653379169", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "c5985690e0c8cbd3d6ada7a94f3fa214ba3ce7a0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
245336870
pes2o/s2orc
v3-fos-license
Is kallikrein-8 a blood biomarker for detecting amnestic mild cognitive impairment? Results of the population-based Heinz Nixdorf Recall study

Background: Kallikrein-8 (KLK8) might be an early blood biomarker of Alzheimer's disease (AD). We examined whether blood KLK8 is elevated in persons with amnestic mild cognitive impairment (aMCI), which is a precursor of AD, compared to cognitively unimpaired (CU) controls. Methods: Forty cases and 80 controls, matched by sex and age (±3 years), were participants of the longitudinal population-based Heinz Nixdorf Recall study (baseline: 2000-2003). Standardized cognitive performance was assessed 5 (T1) and 10 years after baseline (T2). Cases were CU at T1 and had incidental aMCI at T2. Controls were CU at T1 and T2. Blood KLK8 was measured at T2. Using multiple logistic regression, the association between KLK8 in cases vs. controls was investigated by estimating odds ratios (OR) and 95% confidence intervals (95%CI), adjusted for inter-assay variability and freezing duration. Using receiver operating characteristic (ROC) analysis, the diagnostic accuracy of KLK8 was determined by estimating the area under the curve (AUC) and 95%CI (adjusted for inter-assay variability, freezing duration, age, sex). Results: Thirty-seven participants with aMCI vs. 72 CU (36.7% women, 71.0 ± 8.0 (mean ± SD) years) had valid KLK8 measurements. Mean KLK8 was higher in cases than in controls (911.6 ± 619.8 pg/ml vs. 783.1 ± 633.0 pg/ml). Fully adjusted, a KLK8 increase of 500 pg/ml was associated with a 2.68 (1.05-6.84) higher chance of having aMCI compared to being CU. With an AUC of 0.92 (0.86-0.97), blood KLK8 was a strong discriminator for aMCI and CU. Conclusion: This is the first population-based study to demonstrate the potential clinical utility of blood KLK8 as a biomarker for incipient AD.

Supplementary Information: The online version contains supplementary material available at 10.1186/s13195-021-00945-x.

Biomarkers of AD pathology (beta-amyloid (Aβ, A), tau (T), and neurodegeneration (N); together, ATN) are measured in cerebrospinal fluid (CSF) or by neuroimaging techniques like positron emission tomography and magnetic resonance imaging [3]. As the measurement of these biomarkers is expensive and invasive, it is not applicable in primary care settings. Thus, less invasive and less expensive blood-based biomarkers of tau and Aβ have been developed, and ultrasensitive immunoassay techniques allow the measurement of even small amounts of brain-specific proteins in blood samples [4]. Other biomarkers of AD are required in addition to the standard ATN markers.

The extracellular serine protease kallikrein-8 (KLK8, alias neuropsin) is a well-known, dose-dependent modulator of neuroplasticity and memory [5][6][7][8]. It unfolds its effects in part by processing the substrates neuregulin-1 [9], neuronal cell adhesion molecule L1 [10], fibronectin [11], and ephrin receptor B2 [12], and thereby regulates different neuroplasticity-associated signaling pathways [7,[13][14][15]. Our knowledge about the role of KLK8 in pathophysiological processes is, however, very limited. KLK8 is thought to be associated with epilepsy [16], depression [17], and multiple sclerosis [18], but until recently almost nothing was known in the context of AD. Our lab was the first to show pathologically high levels of KLK8 mRNA and protein in different regions of transgenic murine [19] and AD-affected human brain [19,20], long before the clinical signs of disease appear.
This exceedingly early (in mice, prior to the onset of amyloid pathology; in patients, at CERAD A/Braak I-II stage) and multifocal rise of KLK8 [19] is suggestive of a causal role in the cascade of events leading to AD. This hypothesis is further supported by the fact that even a short-term inhibition of this enzyme by an anti-KLK8 antibody [19,21], or its long-term downregulation by genetic knockdown [22], exerted considerable therapeutic effects in transgenic mice. It impedes amyloidogenic amyloid-precursor-protein processing, facilitates Aβ clearance, boosts autophagy, reduces Aβ load and tau pathology, enhances neuroplasticity, improves memory, and unfolds anxiolytic effects [19,21,22]. Furthermore, KLK8 levels were also elevated in blood and CSF of patients with MCI due to AD and early AD [23]. It has been shown that the diagnostic accuracy of CSF KLK8 was as good as that of core CSF biomarkers (Aβ, phosphorylated tau and total tau) for AD, and in the case of MCI even superior to CSF Aβ42. Blood KLK8 was a similarly strong discriminator for MCI due to AD but slightly weaker for AD [23]. Thus, KLK8 might not only be a therapeutic target but also a very early biomarker of AD.

The aim of the present case-control study was to examine whether blood KLK8 is elevated in participants with amnestic MCI (aMCI) compared to cognitively unimpaired (CU) participants in a population-based sample.

Study design and study population

The analysis is based on data of the longitudinal population-based Heinz Nixdorf Recall (Risk Factors, Evaluation of Coronary Calcification, and Lifestyle) study (HNR study). Details of the study design and cohort have been described previously [24]. Briefly, participants for this study were randomly selected inhabitants of the Ruhr area living in Essen (589,676 residents), Bochum (371,582 residents), and Mülheim/Ruhr (172,759 residents). The baseline examination from 2000 to 2003 included 4814 men and women aged 45-75 years, with an overall recruitment efficacy proportion of 55.8% [25]. Participants received annual questionnaires and follow-up examinations after 5 years (T1) and 10 years (T2). In our case-control study, we included participants who were CU or had subjective cognitive decline (SCD) at T1 and had an incidental aMCI diagnosis at T2 (cases; for the definition of aMCI see below). Controls had to be CU at T1 and T2. Up to T2, cases and controls were free of the secondary diseases mentioned below. Two controls were assigned to each case, matched by sex and age (±3 years). Supplementary material 1 shows the results of the matched case-control power analysis. Figure 1 shows the flow chart of the study population. All participants provided written informed consent. The study was approved by the University of Duisburg-Essen Institutional Review Board and followed established guidelines of good epidemiological practice.

Survey of secondary diseases

Participants received standardized computer-assisted interviews and were asked whether they have or had any of the following conditions: ulcerative colitis, Crohn's disease, Parkinson's disease, rheumatoid arthritis, chronic polyarthritis, ankylosing spondylitis, stroke, or tumor disease before T2. Additionally, they were asked in yearly postal follow-up questionnaires about having had stroke or cancer. Participants who denied those diseases until T2 and had no high-sensitivity C-reactive protein (hsCRP) ≥ 1 mg/dl (reference value: 0.3 mg/dl) at T2 were classified as free of those diseases and were included in our study.
We decided to exclude participants with Parkinson's disease, stroke, and tumor disease in order to exclude cognitive impairment due to those diseases. As there is some evidence of an association between KLK8 and inflammation [26], we also decided to exclude participants with signs of inflammation.

Measurement of KLK8

Details of the KLK8 measurements and the technical specifications of the utilized KLK8 enzyme-linked immunosorbent assay (ELISA) kit have been described previously [23]. Briefly, KLK8 measurements in T2 blood were performed in the central reference laboratory at the Institute of Neuropathology, University of Duisburg-Essen. Experimenters were blinded to participants' diagnoses. KLK8 levels were measured in duplicate using a commercially obtained ELISA kit (#EK0819, Boster Biological Technology, Pleasanton, CA, USA) following the manufacturer's instructions. Kits came from two lots. Experimenter 1 measured in February 2020 and experimenters 2 and 3 in March 2021. Serum samples were diluted 1:2 in sample buffer. Agreement between the two measurements was assessed graphically using a scatter plot and a Bland-Altman plot (Supplementary Figs. S1 and S2). Only two subjects had mean KLK8 values that were outside the 95% confidence interval, indicating a high reliability of the KLK8 measurements.

Cognitive performance

A standardized cognitive performance assessment in the HNR study was introduced at T1 and was extended for T2. Details of the cognitive performance assessment have been described previously [27,28]. In brief, cognitive performance at T1 was assessed with five subtests. These included established measures of immediate and delayed verbal memory (eight-word list [29]), speed of processing/problem solving (Labyrinth test [29]), verbal fluency (semantic category "animals" [30]), and visuo-spatial ability (clock-drawing test [31]). For a detailed assessment description, see Wege et al. [32]. The MCI diagnosis at T1 based on these five subtests was validated in a previous study; the short cognitive performance assessment showed good accuracy compared to a detailed neuropsychological and neurological examination (area under the curve = 0.82, 95% confidence interval = 0.78-0.85). At T2, the cognitive performance assessment was extended by the Trail Making Test A (TMT A), Trail Making Test B (TMT B [33]), and a short version of the Stroop task named the Color-Word test [29] (card 1: word reading; card 2: color naming; card 3: color-word interference condition; card 3 minus card 2: interference performance [34]).

For the five subtests that had already been administered at T1, z-transformation of the raw data at T2 using our own defined norm data from T1 was performed: raw data were z-transformed based on the mean and SD of the appropriate age and education group at T1 (age: 50-59 years, 60-69 years, and ≥ 70 years; education: ≤ 10 years, 11-13 years, and ≥ 14 years) [28]. For the subtests of the extended cognitive performance assessment, z-transformation was based on the same education groups and the following three age groups from T2: 55-64 years, 65-74 years, and ≥ 75 years. Except for the clock-drawing test, the age- and education-adjusted test scores were scaled to have a mean of 10 and a standard deviation (SD) of 3 [28,35]. The administered tests were grouped into four domains: (1) attention, (2) executive function, (3) verbal memory, and (4) visuoconstruction (clock-drawing test) [28]. Within each domain, the newly scaled scores of the tests were added. To account for the differing numbers of tests in each domain, domain scores were then scaled to have a mean of 10 and an SD of 3.
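In code, this normed scaling amounts to the following sketch (ours; the stratum key and norm values are hypothetical stand-ins for the age- and education-specific means and SDs described above):

# Hypothetical T1 norms for one age/education stratum: (mean, SD) of raw scores.
norms = {("60-69", "11-13"): (22.0, 5.0)}

def scaled_score(raw, age_group, edu_group, target_mean=10.0, target_sd=3.0):
    mu, sd = norms[(age_group, edu_group)]
    z = (raw - mu) / sd                       # z-transform against stratum norms
    return target_mean + target_sd * z        # rescale to mean 10, SD 3

print(scaled_score(17.0, "60-69", "11-13"))   # one SD below the stratum mean -> 7.0

A scaled score of 7.0 corresponds exactly to the impairment threshold (≤ 7) used below.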
Cognitive impairment was defined as a performance of more than one SD below the mean (a score ≤ 7) in at least one total domain score of the domains attention, executive function and verbal memory, or as a score of ≥ 3 in visuoconstruction [28,31]. We used one SD below the mean to rate a domain as impaired, as suggested by Albert et al. [36] for the core clinical criteria of MCI. The diagnosis of dementia was based on the DSM-IV dementia diagnosis criteria [37], requiring cognitive impairment to be "significant" and to affect activities of daily living. In our study, the "significance" of cognitive impairment was defined by two standard deviations below the age- and education-adjusted mean as our standard, a criterion that is now part of the DSM-5 definition of major neurocognitive disorder [38]. Further, a dementia diagnosis was defined as a previous physician's diagnosis of dementia or taking cholinesterase inhibitors (anatomic-therapeutic-chemical classification issued by the World Health Organization (WHO), code: N06DA) [39] or other anti-dementia drugs (N06DX). We excluded six participants with dementia at T1 and three at T2.

Definition of cases and controls

Cases were participants with an aMCI diagnosis based on meeting all of the following published aMCI criteria of Winblad et al. [40]: (1) cognitive impairment in the verbal memory domain (with or without impairments in the other three domains named above); (2) subjective cognitive decline; (3) normal functional abilities and daily activities; and (4) no dementia diagnosis (definition see above). This definition is equivalent to the core clinical criteria of "MCI due to AD" according to Albert et al. [36] in the absence of further biomarker information. To examine incident aMCI, participants who at T1 had MCI or dementia, or who fulfilled criteria 1, 3, and 4 of the MCI diagnosis without criterion 2 (objective impairment without SCD), were excluded. Controls were CU at T1 and T2. Thus, their cognitive performance was within the age- and education-adjusted range in all domains, and they did not report subjective cognitive decline at T1 or T2. (A schematic code rendering of these definitions is given below, after the covariate definitions.)

Assessment of covariates

Body mass index at T2 (BMI, in kg/m²) was calculated from measured height and weight. Education until T2 was classified according to the International Standard Classification of Education (ISCED-97) as total years of formal education, combining school and vocational training [41]. 'Current smoking' at T2 was defined as a history of cigarette smoking during the past year, and 'past smoking' as quitting smoking more than a year ago; otherwise no. 'Sports' at T2 was defined as 'yes' when one or more sports had been practiced in the last 4 weeks prior to the interview, otherwise no. Blood pressure categories were defined according to the Joint National Committee 7 guidelines [42]; 'hypertension' at T2 was defined as stage 1 or 2, otherwise no. Current depressive symptoms at T2 were assessed using the German 15-item short form of the Center for Epidemiologic Studies Depression Scale (CES-D); the cut-off point for 'elevated depressive symptoms' was ≥ 18 [43]. 'Diabetes mellitus' at T2 was defined as present when participants reported a diagnosis of diabetes mellitus, used antidiabetic medication, or had an elevated fasting serum glucose of ≥ 200 mg/dl.
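Pulling the case and control definitions above together, a schematic rendering in code (ours; field names are illustrative, and the scores are the scaled domain scores defined earlier):

def is_amci(domains, scd, functional_ok, dementia):
    # Winblad-style aMCI as used here: verbal-memory impairment (scaled score <= 7),
    # subjective cognitive decline, intact daily function, and no dementia.
    return (domains["verbal_memory"] <= 7) and scd and functional_ok and not dementia

def is_cognitively_unimpaired(domains, clock, scd, dementia):
    # CU: all domain scores in the normal range, clock-drawing < 3, no SCD, no dementia.
    normal = all(score > 7 for score in domains.values()) and clock < 3
    return normal and not scd and not dementia

case = {"attention": 9, "executive_function": 10, "verbal_memory": 6}
print(is_amci(case, scd=True, functional_ok=True, dementia=False))   # True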
Statistical analysis

Descriptive statistics were performed. To compare cases and controls, p-values were estimated with the Wilcoxon two-sample test (continuous variables, not normally distributed) or the chi-square test (nominal variables). Box plots were created to show the distribution of mean KLK8 by strata according to cognitive status. Box plots were also created to show the distribution of the predictive values of KLK8 according to cognitive status, adjusted for cognitive status, freezing duration, age, sex, and inter-experimenter variability, which should be understood as a proxy for the inter-assay variability (and is hereinafter referred to as 'inter-assay variability'). Using conditional multiple logistic regression, the association between KLK8 and aMCI compared to CU was determined by estimating odds ratios (OR) and 95% confidence intervals (95%CI), adjusted for inter-assay variability and freezing duration. The diagnostic performance of KLK8 was determined using receiver operating characteristic (ROC) analyses, adjusted for inter-assay variability, freezing duration, age, and sex. All analyses were performed using SAS 9.4 (Statistical Analysis System Corp., Cary, NC, USA).

Results

Our study population comprises 40 cases with incident aMCI and frozen blood samples at T2, who were free of the above-mentioned diseases at T2. N = 526 participants were CU at T1 and T2 and free of the above-mentioned diseases at T2. After matching for sex and age ±3 years, we had 80 controls with frozen blood samples at T2 (Fig. 1). In three cases and eight controls, KLK8 was below the detection threshold. Table 1 shows the characteristics of the study population according to cognitive status. Out of 109 participants, 36.7% were women and the mean age was 71.0 ± 8.0 years. The 37 participants with aMCI had a higher mean KLK8 value compared to the 72 CU participants (911.6 ± 619.8 pg/ml vs. 783.1 ± 633.0 pg/ml). Participants with aMCI were more often APOE ε4 positive compared to CU participants (48.6 vs. 26.4%) and had lower z-scores in all four domains. According to the cutoff (CES-D ≥ 18), 16% of the participants with aMCI had depression, and CU participants were free of depression. At T1, only four participants with aMCI had depression, and CU participants at T1 were also free of depression (data not shown). The mean freezing duration was lower in participants with aMCI compared to CU participants (7.5 ± 1.0 years vs. 8.3 ± 0.7). There were no major differences between cases and controls regarding the other variables (age, BMI, education, smoking status, sports, hypertension, and diabetes mellitus). Differences in KLK8 values between the groups are also shown in box plots in Fig. 2 (mean values according to strata) and Fig. 3 (adjusted predictive values of KLK8). After adjustment, KLK8 levels are generally higher in aMCI than in CU.

Table 2 shows the results of the conditional logistic regression analyses estimating the association between KLK8 and cognitive status. Fully adjusted, a KLK8 increase of 500 pg/ml was associated with a 2.68-fold (95%CI: 1.05-6.84) higher chance of having aMCI compared to being CU. When excluding participants with elevated depression (CES-D ≥ 18) from our analyses, our results on the association of KLK8 and aMCI did not change (Supplementary material Table S1). Using the same, less stringent aMCI diagnostic criteria of T1 also at T2 did not affect the degree of KLK8 association with aMCI (Supplementary material Table S2). Figure 4 shows the ROC curve after adjustment for inter-assay variability and freezing duration. The area under the curve (AUC) is 0.92 (95%CI: 0.86-0.97). The crude ROC curve is presented in Supplementary material Fig. S3.
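For readers who want to reproduce this kind of analysis outside SAS, a sketch (ours; the data frame is hypothetical, and we read the "adjusted ROC" as the ROC of fitted probabilities from a covariate-adjusted logistic model, which is one standard way to implement it; the study additionally adjusts for inter-assay variability, omitted here for brevity):

import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical rows; 'set_id' groups one case with its two matched controls.
df = pd.DataFrame({
    "case":   [1, 0, 0, 1, 0, 0],
    "klk8":   [950.0, 700.0, 820.0, 1100.0, 640.0, 1150.0],
    "freeze": [7.5, 8.2, 8.4, 7.1, 8.3, 8.0],   # freezing duration in years
    "set_id": [1, 1, 1, 2, 2, 2],
})
df["klk8_500"] = df["klk8"] / 500.0             # so the OR is per 500 pg/ml

# Conditional logistic regression within matched sets (OR per 500 pg/ml KLK8).
res = ConditionalLogit(df["case"], df[["klk8_500"]], groups=df["set_id"]).fit()
print(np.exp(res.params))                       # odds ratio

# Covariate-adjusted ROC: AUC of fitted probabilities from a logistic model.
X = df[["klk8_500", "freeze"]]
probs = LogisticRegression().fit(X, df["case"]).predict_proba(X)[:, 1]
print(roc_auc_score(df["case"], probs))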
Discussion Our study is the first population-based case-control study to investigate whether KLK8 is a suitable blood-based biomarker for the diagnosis of incident aMCI, a precursor of AD. We found an increase in blood KLK8 of 500 pg/ml to be associated with 2.68-fold increased odds of an aMCI diagnosis in comparison with cognitively healthy participants. The diagnostic performance of blood KLK8 for aMCI was very good, with an AUC of 0.92. Our results are well in line with the recently published data from Teuber-Hanselmann et al. [23], which showed the diagnostic accuracy of CSF KLK8 to be as good as that of core CSF biomarkers, i.e., Aβ42/Aβ40 or phosphorylated tau, with blood KLK8 being a similarly strong discriminator for MCI (AUC: 0.94; 0.86-1.00) as in the present study. However, the KLK8 cut-off point of 1121 pg/ml in that study was associated with 130-fold increased odds of MCI due to AD (95% CI: 15-1100) in comparison with controls. When applying the same cut-off value to our sample, we found 2.62-fold increased odds of aMCI (0.61-11.33; data not shown). This is most likely due to methodological differences between the two studies. Teuber-Hanselmann et al. examined participants from a clinical setting, not from a population-based sample. It is likely that these participants were more severely affected by their cognitive symptoms and were hospitalized to evaluate the symptoms in a clinical setting. Furthermore, the clinical diagnoses were based on neuropsychological assessments and biomarker evidence for AD pathophysiological processes [36,44]. Thus, their MCI due to AD cases can be considered to be "truly" on the Alzheimer's continuum. Our aMCI diagnosis was based solely on the "core clinical criteria" by Albert et al. [36], as we did not have biomarker information. There is also a difference in the control groups between the studies. In the study by Teuber-Hanselmann et al., the control group was very heterogeneous, consisting of healthy participants and patients with headache, psychiatric diseases, or Parkinson's disease. Additionally, the storage duration of the samples differed considerably between the studies, with 50% or more additional storage time in the present study. Thus, the two studies cannot be compared directly. However, both studies show that the diagnostic accuracy of blood KLK8 is very high. The diagnostic accuracy of KLK8 is not only important for an early AD diagnosis but, based on its therapeutic potential shown in transgenic mice [19], possibly also relevant for patient stratification in therapeutic settings. So far, clinical trials using Aβ and tau as therapeutic targets have led to contradictory and sobering observations [45], making the need for new players even more dire. Although different KLK8 substrates, e.g., ephrin receptor B2 (EPHB2) [46], fibronectin [47], neuregulin-1 [48], and the neural cell adhesion molecule L1 (L1CAM) [49], have long been known to be directly involved in the pathophysiology of AD, it was not until recently that the role of KLK8 in the context of AD was recognized. Our lab was the first to demonstrate elevated cerebral KLK8 levels in the very early stages of AD, i.e., in AD patients with the onset of the first Aβ plaques and in transgenic mice even before the onset of Aβ pathology [19]. Moreover, several aspects of AD pathology could be alleviated by antibody-mediated inhibition of KLK8 [19,21] or the genetic knockdown of KLK8 in transgenic mice [22].
Interestingly, single nucleotide polymorphisms (SNPs) in KLK8 [50], like those in CD33 [51], TOMM40 [52], and APOE [53], are located in the same chromosomal region, 19q13, which is strongly associated with AD risk. The present study, alongside the aforementioned findings, now adds another piece of evidence supporting a role for KLK8 in the emergence of AD. Strengths Our study has several strengths. Our cases and controls are derived from a large, randomly selected population-based sample with high quality of data collection and processing, which was confirmed by external certification of the HNR study. All cases and controls were free of other major diseases and within the normal range of inflammatory parameters. Furthermore, we were able to perform an excellent matching of cases and controls by sex and age. Unlike in the previous case-control study [23], collection of blood was performed in one central reference laboratory, and the storage duration of the samples was very similar. Limitations The major limitation is that our aMCI diagnosis is not based on AD biomarker information. Thus, we cannot state whether our aMCI participants are on the "Alzheimer's continuum" as defined by Jack et al. [3]. However, we excluded participants with Parkinson's disease, stroke, and tumor disease to rule out cognitive impairment due to those diseases. Nonetheless, a confounding effect of an unknown comorbidity or premedication cannot be entirely excluded. A further limitation is the size of the finally analyzed population. Our study was primarily designed as a cardiovascular health study on myocardial infarction, with long follow-up times between examinations (every 5 years) and a relatively young cohort of participants (45-75 years at baseline). Thus, we were not able to include a group of participants with incident AD in this analysis at the current time point. Currently, our dementia endpoint committee is gathering data from follow-up questionnaires and medical reports to identify those individuals who progressed to AD in the further course. Blood KLK8 levels of these participants will then be determined at T2 (>5 years before AD onset) and T1 (>10 years before AD onset) to verify the potential of KLK8 in predicting presymptomatic AD. Conclusions Our study is the first population-based case-control study to demonstrate the capability of KLK8 as a blood biomarker for early diagnosis of AD. A 500 pg/ml increase in blood KLK8 was associated with an almost threefold increased chance of an aMCI diagnosis compared with cognitively unimpaired participants. The diagnostic performance of KLK8 in blood for aMCI was very good, with an AUC of 0.92. Larger validation studies in a longitudinal design are now warranted.
2021-12-21T14:33:00.458Z
2021-12-01T00:00:00.000
{ "year": 2021, "sha1": "e7d5aad050e2e6920804ef74970f07761f794809", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "50455fb09d5eb7c213fff0583e60beb5fc6c5544", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
90661493
pes2o/s2orc
v3-fos-license
Chemical and bioactive comparison of Panax notoginseng root and rhizome in raw and steamed forms Background The root and rhizome are the historically and officially utilized medicinal parts of Panax notoginseng (PN) (Burk.) F. H. Chen, which in raw and steamed forms are used differently in practice. Methods To investigate the differences in chemical composition and bioactivities of PN root and rhizome between the raw and steamed forms, high-performance liquid chromatography analyses and pharmacologic evaluations by tests of anticoagulation, antioxidation, hemostasis, antiinflammation, and hematopoiesis were combined. Results With increasing steaming time, the contents of ginsenosides Rg1, Re, Rb1, Rd, and notoginsenoside R1 in PN decreased, while those of ginsenosides Rh1, 20(S)-Rg3, 20(R)-Rg3, Rh4, and Rk3 increased gradually. Raw PN samples steamed for 6 h at 120 °C, with stable levels of most constituents, were used for the subsequent study of bioeffects. Raw PN showed better hemostasis, anticoagulation, and antiinflammation effects, while steamed PN exhibited stronger antioxidation and hematopoiesis activities. For the different parts of PN, the contents of saponins in PN rhizome were generally higher than those in the root, which could be related to the stronger bioactivities of the rhizome compared with the same form of PN root. Conclusion This study provides basic information on the chemical and bioactive comparison of PN root and rhizome in both raw and steamed forms, indicating that the change in saponins may play a key role in the different properties of raw and steamed PN. Introduction The root and rhizome are the historically and officially utilized medicinal parts of Panax notoginseng (PN) (Burk.) F. H. Chen, one of the major herbal plants in the genus Panax (Araliaceae) [1]. PN has been widely used for treating blood disorders for centuries; it is now included as a herbal medicine in the pharmacopeias of China, the USA, Britain, and Europe, among others, and was also included as a dietary supplement under the US Dietary Supplement Health and Education Act in 1994 [2-6]. Over hundreds of years of medicinal use, PN's properties have been described as "the raw materials eliminate and the steamed ones tonify". The so-called "eliminate" means that raw PN can stop bleeding, eliminate blood stasis, promote blood circulation, diminish swelling, and ease pain. The "tonify" means that steamed PN can nourish the blood and improve health [7,8]. Pharmacologic studies have shown that the effects of PN change when it is steamed. Lau et al [9] compared the bleeding time (BT) in rats orally treated with raw and steamed PN roots and found that treatment with raw PN extract resulted in a shorter BT than treatment with steamed PN extract. Besides, raw PN extract displayed a much better lipid-lowering effect than steamed PN, as shown by the levels of total cholesterol and triglyceride in steatotic L02 cells [10]. Such differences might be attributed to changes in the contents, relative proportions, and structures of the chemical constituents of PN. Wang et al [11] used high-performance liquid chromatography (HPLC) to analyze the saponins of PN root during the steaming process and found that the contents of five main saponins (ginsenosides Rg1, Rb1, Rd, Re, and notoginsenoside R1) in raw PN root decreased gradually while other new saponins were formed. Some of those newly produced constituents are responsible for the effects of steamed PN.
Sun et al [12] reported that the steaming process significantly influenced the transformation of Rg3, an anticancer compound, whose content was 5.23-fold higher in root steamed for 2 h at 120 °C, and 3.22-fold higher when steamed for 4 h, than when steamed for 1 h at 120 °C. Nevertheless, most pharmacologic studies on raw and steamed PN have focused on the root, whereas little attention has been paid to the difference in effects between the raw and steamed forms of the rhizome, the other officially medicinal part of PN. According to recent research, the chemical composition of the different parts of PN varies. Wang et al [13] employed ultra-performance liquid chromatography-quadrupole time-of-flight mass spectrometry to quantitatively compare eight dammarane-type saponins in different parts of raw PN, finding that the content of each saponin was always highest in the rhizome, followed by the main root, the branch root, and then the fibrous root, which was consistent with previous research [14,15]. Other components such as polysaccharides, flavonoids, and heavy metals were also found to be distributed differently among the parts of PN [16-18]. Among those components, dammarane-type saponins (including ginsenosides and notoginsenosides) are considered the major active constituents of PN and might contribute to its pharmacologic and therapeutic effects [19]. However, the bioeffects related to the traditional uses of PN root and rhizome have not been compared comprehensively. Modern pharmacologic research attributes the blood-nourishing and body-tonifying functions of herbal medicines to their antioxidant and immunomodulatory activities [20]. And since blood deficiency in the theory of traditional Chinese medicine (TCM) is considered similar to blood-loss anemia, routine blood indicators such as the levels of white blood cells (WBC), red blood cells (RBC), hemoglobin (Hb), and platelets are often used to evaluate the blood-nourishing efficacy of medicines [21,22]. In this work, the saponins in PN root and rhizome during the steaming process were investigated to determine a suitable steaming condition for steamed PN. We then carried out, for the first time, a comprehensive comparison of the pharmacologic effects of PN root and rhizome in raw and steamed forms, evaluated by anticoagulation, antioxidation, hemostasis, and antiinflammation tests, as well as a model of hydracetin-induced anemia in mice. Sample preparation The roots and rhizomes of PN were obtained from a single batch of samples from Yunnan, China. Steamed PN samples were prepared by steaming the crushed raw PN root and rhizome in an autoclave (Shanghai, China) for 2, 4, 6, 8, and 10 h at 105 °C, 110 °C, and 120 °C. The steamed material was then dried in a hot-air drying oven at about 45 °C to constant weight, powdered, and sieved through a 40-mesh sieve. Animals Kunming (KM) mice, male and female, weighing 18-22 g, were purchased from Tianqin Biotechnology Co. Ltd., Changsha, Hunan [SCXK (Xiang) 2014-0011]. Before the experiments, the mice were given a 1-week acclimation period in a laboratory at room temperature (20-25 °C) and constant humidity (40-70%) and were fed standard rodent chow and tap water freely. The animal experimental procedures in this study strictly conformed to the Guide for the Care and Use of Laboratory Animals and the related ethics regulations of Kunming University of Science and Technology. HPLC analyses The sample solutions were prepared according to the method described in the Chinese Pharmacopoeia [2].
A mixed standard solution, containing (in mg/mL) 0.40 notoginsenoside R1, 0.55 ginsenoside Rg1, 0.50 Re, 0.60 Rb1, 0.50 Rd, 0.60 Rh1, 1.00 Rk3, 1.00 Rh4, 0.45 20(S)-Rg3, and 0.55 20(R)-Rg3, was prepared by adding each standard into a volumetric flask and dissolving with methanol. A series of standard solutions of seven concentrations was prepared by diluting the mixed standard solution with methanol for the determination of the standard curves. HPLC analyses were done on a 1260 series system (Agilent Technologies, Santa Clara, CA, USA) consisting of a G1311B pump, a G4212B diode-array detector, and a G1329B autosampler. A Vision HT C18 column (250 mm × 4.6 mm, 5 µm) was adopted for the analyses. The mobile phase consisted of A (ultrapure water) and B (MeCN). The gradient mode was as follows: 0-20 min, 80% A; 20-45 min, 54% A; 45-55 min, 45% A; 55-60 min, 45% A; 60-65 min, 100% B; 65-70 min, 80% A; 70-90 min, 80% A. The flow rate was set at 1.0 mL/min, the detection wavelength at 203 nm, the column temperature at 30 °C, and the injection volume at 10 µL. Anticoagulation test in vitro Blood was collected via the posterior orbital venous plexus of mice anesthetized with ether and was directly transferred into citrated tubes (0.109 M citrate, 9:1). The supernatant platelet-poor plasma (PPP) was obtained by centrifuging the blood samples at 3000 rpm for 10-15 min. A mixture of PPP and thrombokinase of various concentrations at a proportion of 2:1 (v/v), totaling 50 µL, was added to the test cup and incubated for 3 min at 37 °C in a blood coagulation instrument (XN06 series, Diagnostic Technology Ltd of Wuhan, Jingchuan, China). 100 µL of 10 U/mL thrombokinase dissolved in 0.1 mol/L Tris-HCl buffer solution (pH 7.4) was subsequently added and incubated under the same conditions. The prothrombin time (PT) was determined in accordance with the manufacturer's recommended protocols. The prolongation rate of PT was calculated according to the following equation: prolongation rate of PT (%) = (PT − PT0)/PT0 × 100, where PT0 is the prothrombin time of the control (blank, with normal saline in place of thrombokinase) and PT is the prothrombin time in the presence of thrombokinase. The standard curve was drawn with the concentration of thrombokinase (Ui) as the X axis and lg[prolongation rate of PT (%)] as the Y axis. PN samples of 5 g, in powdered form, were extracted with pure water (50.0 mL) by refluxing twice for 2 h at 80 °C. The combined solution was filtered and concentrated under reduced pressure to an extract containing 0.1 g/mL of PN. The extract was then diluted with normal saline to different concentrations. The PT of the mixed plasma sample containing PPP and PN extract (PT′) was determined at the different extract concentrations. The prolongation rate of PT′ was calculated according to the following equation: prolongation rate of PT′ (%) = (PT′ − PT′0)/PT′0 × 100, where PT′0 is the prothrombin time of the control (blank, with normal saline in place of extract) and PT′ is the prothrombin time in the presence of extract. The corresponding concentration of thrombokinase (Ui) was determined from the standard curve, and the thromboplastin inhibition rate (%) was calculated according to the following equation: thromboplastin inhibition rate (%) = (Uadded − Ui)/Uadded × 100, where Ui is the thromboplastin concentration determined from the standard curve and Uadded is the concentration of thrombokinase added.
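A minimal sketch of these calculations follows (variable names, and the linear fit of lg(prolongation rate) against thrombokinase concentration for the standard curve, are illustrative assumptions; prolongation rates must be positive for the log transform):

```python
import numpy as np

def prolongation_rate(pt, pt0):
    """Prolongation rate of PT (%) relative to the blank control PT0."""
    return (pt - pt0) / pt0 * 100.0

def thromboplastin_inhibition(pt_prime, pt0_prime, conc_std, rate_std, u_added):
    """Fit the standard curve lg(rate) vs. thrombokinase concentration,
    read off the concentration U_i equivalent to the extract sample,
    and express the inhibition as a percentage of the amount added."""
    slope, intercept = np.polyfit(conc_std, np.log10(rate_std), 1)
    y = np.log10(prolongation_rate(pt_prime, pt0_prime))
    u_i = (y - intercept) / slope
    return (u_added - u_i) / u_added * 100.0
```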
Antioxidation test in vitro The extracts prepared in "Section 2.5" were diluted with normal saline to 0.5, 1, 1.5, 2, 2.5, 3, and 3.5 mg/mL. The scavenging activities for DPPH free radicals and hydroxyl radicals were measured according to the procedure described by Zhao et al [23]. Hemostasis test in vivo Sixty KM mice, half male and half female, were randomly divided into six groups, namely the control group, the vitamin K group, a low-dose raw PN group, and low-, moderate-, and high-dose steamed PN groups. The mice in the control group were administered 0.9% normal saline, whereas the other groups were administered vitamin K (0.4 g/kg), raw PN powder (0.4 g/kg), or steamed PN powder (0.4 g/kg, 0.8 g/kg, or 1.6 g/kg, respectively) by gavage for 7 days. The tail bleeding model was based on previous methods with minor modifications [24,25]. One hour after the last administration, following anesthetization, the mouse tail was precisely transected at 5 mm from the tip. The time from the start of transection to bleeding cessation was recorded as the BT. Bleeding cessation was considered to be the time when the flow of blood stopped for at least 30 s. Also one hour after the last administration, following anesthetization, blood was collected via the posterior orbital venous plexus with a glass capillary. The glass capillary filled with blood was broken off alternately from both ends every 30 s to check for the presence of blood coagulation. Once blood coagulation appeared at either end, the other end was broken off to verify it. The time from the start of bleeding to the presence of blood coagulation was recorded as the coagulation time (CT). Anti-inflammation test in vivo Forty KM mice, half male and half female, were randomly divided into four groups, namely the control group, the aspirin group, the raw PN group, and the steamed PN group. The mice in the control group were administered 0.9% normal saline, whereas the other groups were administered aspirin (0.45 g/kg), raw PN powder (1.8 g/kg), or steamed PN powder (1.8 g/kg), respectively, by gavage for 5 days. The xylene-induced ear edema model was applied to evaluate the antiinflammatory activity, based on previous methods with minor modifications [26,27]. Thirty minutes after the last administration, edema was induced in each mouse by applying 50 µL of xylene to the inner surface of the right ear. Mice were sacrificed under ether anesthesia after 45 min. Both ears were cut off, trimmed to equal size, and weighed to calculate the swelling degree and the inhibition rate of swelling.
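Since the exact formulas are not spelled out in the text, the edema endpoints can be sketched using the standard definitions for this model (a hedged illustration, not the authors' code):

```python
import numpy as np

def swelling_degree(treated_ear_mg, untreated_ear_mg):
    """Swelling degree of one mouse: weight difference between the
    xylene-treated (right) and untreated (left) ear pieces."""
    return treated_ear_mg - untreated_ear_mg

def inhibition_rate(control_swellings, treated_swellings):
    """Inhibition of swelling (%) of a treatment group relative to
    the saline control group."""
    c = np.mean(control_swellings)
    t = np.mean(treated_swellings)
    return (c - t) / c * 100.0
```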
Measurements of blood parameters Seventy KM mice, half male and half female, were randomly divided into seven groups, namely the control group, the model group, the Fufang E'jiao Jiang (FEJ) group, a high-dose raw PN group, and low-, moderate-, and high-dose steamed PN groups, with 10 mice in each group. The hydracetin- and cyclophosphamide-induced anemia model was applied to evaluate the "blood tonifying" function of PN, based on previous methods [28]. The anemia model was established by intraperitoneal injection of cyclophosphamide at 70 mg/kg for the first 3 days and hypodermic injection of hydracetin at 0.02 mg/kg on the fourth day. Mice in the control group were administered 0.9% normal saline, whereas the other groups were administered FEJ (8 mL/kg), raw PN powder (1.8 g/kg), or steamed PN powder (0.45 g/kg, 0.90 g/kg, or 1.8 g/kg, respectively) by gavage for 12 days. Blood was then collected for routine blood analysis, including the levels of WBC, RBC, Hb, and platelets, 30 min after the last administration. Statistical analyses Results are given as mean ± standard deviation. SPSS v21.0 (Statistical Package for the Social Sciences, Chicago, IL, USA) was used to carry out the t test and principal component analysis (PCA). A p value < 0.05 was considered significant, and a p value < 0.01 highly significant. The median effective concentration (EC50) was fitted by probit regression with Origin 7.5 for Windows (OriginLab, Northampton, MA, USA). Heml 1.0 (CUCKOO Workgroup, Wuhan, Hubei, China) was used for the clustering analysis and for drawing the heatmap. HPLC analyses The results of the HPLC analyses indicated a distinct difference in saponin composition between raw and steamed PN. During the steaming process, the contents of five major saponins in raw PN, including ginsenosides Rg1, Re, Rb1, Rd, and notoginsenoside R1, decreased gradually, whereas other new saponins were formed (Figs. 1A, 1B). By comparing the chromatograms of the PN samples with that of the mixed standard solution, five newly converted saponins were identified as ginsenosides Rh1, 20(S)-Rg3, 20(R)-Rg3, Rh4, and Rk3 (Figs. 1C, 1D). The result was consistent with the report by Wang et al [11]. That transformation might be due to the cleavage of glycosidic bonds induced by high temperature; hydrolysis or dehydration at C-20 could form new saponins during the steaming process, so high temperature promotes these reactions. Comparing the steaming conditions, the content change of the saponins was time-dependent at the lower steaming temperatures of 105 °C and 110 °C, while for the same steaming time, both the formation and the degradation of saponins were temperature-dependent. The ten saponins clustered into two groups, namely a content-decreased group including ginsenosides Rg1, Re, Rb1, Rd, and notoginsenoside R1, and a content-increased group including ginsenosides Rh1, 20(S)-Rg3, 20(R)-Rg3, Rh4, and Rk3 (Figs. 2A, 2B). After 6 h of steaming at 120 °C, the content change of most saponins in PN root and rhizome became steady. The overall levels of saponins in PN rhizome were higher than those in the root. According to the PCA result in Fig. 2C, the PN samples could be well classified into four groups according to steaming temperature, with a total explained variance of 98.634%; PC1 and PC2 accounted for 54.997% and 43.636%, respectively. This indicated that the levels of saponins differed significantly among raw PN and the samples steamed at 105 °C, 110 °C, and 120 °C. However, samples with different steaming times could not be distinguished (Fig. 2D). Based on the above analysis, PN root and rhizome steamed for 6 h at 120 °C were chosen as the steamed samples for the subsequent pharmacologic tests. Anticoagulation test in vitro PT is used to evaluate the overall efficiency of the extrinsic clotting pathway; a prolonged PT indicates a deficiency in coagulation factors V, VII, and X [29]. In this study, the EC50 determined from the logarithm of the PT prolongation rate was used to evaluate the anticoagulation effect of PN. The standard curve between the concentration of thrombokinase and the logarithm of the PT prolongation rate is shown in Fig. 3A, exhibiting good linearity (R = −0.9991). According to the comparison of effects in Fig. 3B, raw PN showed significantly stronger anticoagulation than steamed PN.
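An equivalent of the Origin probit fit mentioned under Statistical analyses might look like this in Python (a sketch under the assumption that the response, expressed as a fraction of the maximal effect, is regressed on log10 concentration on the probit scale):

```python
import numpy as np
from scipy import stats

def ec50_probit(conc, response_frac):
    """Fit probit(response) = a * log10(conc) + b and solve for the
    concentration giving a 50% response (probit(0.5) = 0)."""
    x = np.log10(conc)
    y = stats.norm.ppf(np.clip(response_frac, 1e-6, 1 - 1e-6))
    slope, intercept = np.polyfit(x, y, 1)
    return 10 ** (-intercept / slope)
```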
The result provides evidence for the traditional use of PN: raw PN is more effective than steamed PN at activating blood flow and removing blood stasis, and this beneficial effect could be related to the coagulation system. Meanwhile, for the different parts of PN under the same processing, the EC50 values of PN rhizome were lower than those of PN root, indicating that the anticoagulation effect of the rhizome was better than that of the root, which could be related to the higher levels of saponins in the PN rhizome. Antioxidation test in vitro PN root and rhizome are commonly consumed as a popular food tonic in soup form by people in the southern region of China. Various studies have suggested that the tonifying functions of Chinese herbal medicines could be due, at least partially, to protective effects against oxidation [20]. The DPPH molecule, which contains a stable free radical, has been widely used to evaluate the radical-scavenging ability of antioxidants, and the hydroxyl radical is highly reactive and can be generated in biological cells through the Fenton reaction. DPPH and hydroxyl radical scavenging assays are commonly used for the determination of the antioxidant activities of plant extracts. Therefore, the two methods were applied to investigate the antioxidation effects of raw and steamed PN root and rhizome, with the results shown in Fig. 4, where the median inhibitory concentration (IC50) is the concentration of PN scavenging 50% of the free radicals. L-ascorbic acid served as the positive control, exhibiting stronger activity in scavenging DPPH free radicals than in scavenging hydroxyl radicals. Conversely, the PN samples showed higher IC50 values for scavenging DPPH radicals than for scavenging hydroxyl radicals, suggesting that PN could eliminate hydroxyl radicals at lower concentrations. Although both raw and steamed PN reacted directly with DPPH and hydroxyl radicals, steamed PN root and rhizome generally showed stronger radical-scavenging effects than the corresponding raw parts. Since antioxidation activity is partially correlated with the tonifying function of herbal medicines [20], this result supports the efficacy difference between raw and steamed PN, namely that steamed PN has a better tonifying effect than the raw form. For the different parts of PN under the same processing, PN rhizome also showed significantly stronger activity in scavenging both DPPH and hydroxyl free radicals than the root under the same steaming condition (p < 0.05). Hemostasis test in vivo The BT and CT of mice were affected by the interplay of factors involved in platelet aggregation and plasma coagulation. To gain insights into the overall hemostatic effect in vivo, we investigated the effects of raw and steamed PN root and rhizome on BT and CT using an established mouse tail bleeding model. Vitamin K, which served as the positive control, shortened BT and CT significantly compared with the blank control group. As shown in Fig. 5, oral treatment with the steamed extract resulted in longer BT and CT compared with mice treated with raw PN. Dencichine, a nonprotein amino acid with beneficial effects on hemostasis, is present at higher content in raw PN [30] and is readily degraded at high temperature, which provides a plausible explanation for the hemostatic difference between raw and steamed PN.
For the different parts of PN, the BT of mice treated with PN rhizome was significantly shorter than that of mice treated with the same dose of root (p < 0.05), and mice treated with high-dose (1.6 g/kg) steamed PN rhizome showed significantly shorter CT than mice treated with the same dose of steamed root (p < 0.05). Antiinflammation test in vivo The ear edema model permits the evaluation of antiinflammatory steroids and is less sensitive to nonsteroidal antiinflammatory agents [27]. As shown in Fig. 6, compared with the control group, oral administration of aspirin at 0.45 g/kg significantly inhibited the development of ear edema induced by the topical application of xylene. The swelling degrees after treatment with PN root and rhizome at a dose of 1.8 g/kg were also significantly inhibited (p < 0.05 for PN root and p < 0.01 for PN rhizome) compared with the control group. The inhibition by 1.8 g/kg of PN rhizome was significantly greater than that by the same dose of root (p < 0.01), suggesting that PN rhizome had a better antiinflammatory effect. For the same part of PN under different processing, there was no significant difference between raw and steamed PN, except for the inhibition rates between the treatments with raw and steamed PN root (p < 0.05). Blood parameters after treatment with raw and steamed PN root Anemia is a very common and difficult-to-treat syndrome, regarded in modern medicine as a decrease in hemoglobin, and considered similar to blood deficiency in the theory of TCM [21]. The treatment of blood deficiency often includes the use of medicinal formulas with blood-enriching and body-tonifying functions [31]. In this research, FEJ served as the positive control to investigate the blood-tonifying function of PN. Cyclophosphamide, a cytotoxic agent commonly used to induce aplastic anemia, was applied to establish the anemia model. After 15 days of administration, the quantities of WBC, RBC, Hb, and platelets in the peripheral blood of the mice were as shown in Fig. 7. Compared with the control group, the levels of WBC, RBC, Hb, and platelets in the model group were decreased significantly (p < 0.01), indicating that the anemia model was established successfully. Compared with the model group, the WBC and Hb levels in mice treated with FEJ and the three doses of steamed PN were increased significantly (p < 0.01), and the RBC and platelet levels of the FEJ, moderate-dose, and high-dose steamed PN groups were increased significantly (p < 0.05). Besides, there were no significant differences in the levels of the above four parameters between the control group and the steamed PN groups (except for the RBC level in the low-dose steamed PN group), suggesting that steamed PN could significantly reverse the decreases in the quantities of WBC, RBC, Hb, and platelets in a dose-dependent manner. Raw PN, in contrast, significantly elevated the WBC level but made no significant difference to the levels of RBC, Hb, and platelets compared with the model group, indicating that the blood-enriching effect of raw PN was generally weaker than that of steamed PN. According to these results, steamed PN could enhance the hematopoietic capacity of mice with chemotherapy-induced anemia, which is consistent with the traditional use of steamed PN [8]. Conclusions PN root and rhizome have been used traditionally in both raw and steamed forms.
However, the chemical composition and the bioactive effects related to the clinical use of raw and steamed PN medicinal parts had not been compared comprehensively, leaving a lack of evidence for the differentiated use of PN root and rhizome under different processing conditions. In this research, we clearly confirmed the better anticoagulation, hemostasis, and antiinflammatory effects of raw PN, as well as the stronger antioxidation and hematopoiesis effects of steamed PN, which is consistent with the description of PN's properties that "the raw materials eliminate and the steamed ones tonify". These differences in effects are probably attributable to the chemical composition of PN in its different forms. For example, higher levels of ginsenosides Rg1, Re, Rb1, Rd, and notoginsenoside R1 were found in raw PN, whereas increased levels of Rh1, 20(S)-Rg3, 20(R)-Rg3, Rh4, and Rk3 were observed in steamed PN. For the different parts of PN, the levels of those saponins in PN rhizome were found to be generally higher than those in the root, which could be related to the stronger activities of the rhizome, including anticoagulation, antioxidation, hemostasis, and antiinflammation effects, compared with the same form of PN root. The present study provides an experimental basis for applying raw and steamed PN differentially in the clinic: raw PN is preferable for treating hemorrhages, blood stasis, and swelling, while steamed PN is more beneficial for nourishing the blood and tonifying the body. Furthermore, the active constituents correlated with the different pharmacologic effects of PN and the mechanisms involved will be investigated in subsequent research. Conflicts of interest The authors have declared no conflicts of interest.
2019-04-02T13:12:10.499Z
2017-11-21T00:00:00.000
{ "year": 2017, "sha1": "06f929c6c6704136120bc6475061fcb4d8a78513", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.jgr.2017.11.004", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7cdbcfc59ba1279ae89cf40b1f6a910c9fb29cb9", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
251809700
pes2o/s2orc
v3-fos-license
Time to rechallenge primary prevention ICD guidelines
EDITORIAL Patients, therapy, and evidence have evolved in leaps since 2002, when MADIT-2 1 first demonstrated a mortality benefit from using primary prevention ICD therapy. From 1995 to 2014, there has been a 44% decline in sudden death across trials. 2 This is attributed to an improvement in adherence to heart failure (HF) medical therapy, namely, angiotensin inhibition (ACEI/ARB), beta-blockers, and mineralocorticoid receptor antagonists (MRA). 2 There is also a contemporary shift in the demographics of HF patients, who are older and have more co-morbidities, leading to a competing risk of non-sudden cardiac death (SCD). In addition to medical therapy and lifestyle modification, structural improvements in hospital and community pathways to access early HF specialist input have also advanced. 3 In addition to established therapies, landmark trials using sacubitril-valsartan 4 (ARNi) and sodium-glucose co-transporter 2 inhibitors 5 (SGLT2i) have each demonstrated a further ~20% reduction in ventricular arrhythmia (VA) and SCD. There is now evidence to support that ARNi use improves cardiomyocyte electrophysiological remodelling, with a reduction in QTc, QRS duration, and mechanical dispersion at 6 months. 6 SGLT2i also act on multiple electrophysiological characteristics (viz. Ca2+ regulation, late Na+ and Na+/hydrogen-exchanger currents), 7 which may similarly contribute to their anti-arrhythmic properties beyond their ability to improve left ventricular ejection fraction (LVEF). Recent observational evidence demonstrates that patients who receive guideline-directed medical therapy have an almost fourfold reduction in risk of death at 2 years compared with those not on therapy, conferring a 34% reduction in risk of death for each drug added in patients who have an ICD implanted. 8 Current ESC guidelines 9 recommend ICD implantation with a Class 1a indication in those with ischaemic HF with an LVEF <35% despite optimal medical therapy (OMT) for ≥3 months, symptoms (NYHA II/III), a QRS < 130 ms, and a life expectancy of at least 1 year. Considering there have been no new ICD trials in ischaemic HF since SCD-HeFT, 10 published in 2005, these recommendations seem outdated. For non-ischaemic HF, new evidence from the DANISH trial softened the ICD indication from a Class I indication in the 2016 guidelines to a IIa classification in 2021. The recommendation of OMT for ≥3 months before ICD implantation may be premature. Post hoc analyses of ARNi therapy suggest that there are still improvements in LVEF well beyond 3 months. At 6 months after ARNi initiation, 32% of patients who were ICD eligible at baseline become ineligible due to LVEF improvements, and by 12 months, that proportion almost doubles to 62%. 4 ICD implantation at 3 months would, in the majority of patients, be too soon, as their LVEF would still be on an upward trajectory, with many being able to avoid an ICD if given more time for left ventricular recovery.
This is likely also the case for improvements in NYHA functional classification. ESC guidelines advise that those in NYHA Class I do not benefit from primary prevention ICD therapy. ARNi therapy improves NYHA class by a mean difference of −0.79, 11 and SGLT2i by an odds ratio of 1.3, 12 and ~70% of patients enrolled in PARADIGM and EMPEROR-Reduced were categorized as NYHA Class II. This suggests that a proportion of NYHA Class II patients treated with ARNi and SGLT2i therapy improve to below NYHA Class II, thereby no longer meeting the ICD indication under the current guidelines. These benefits have been recently demonstrated in a multi-centre Italian registry, 13 and furthermore, the benefit of SGLT2i on VA/SCD risk has been demonstrated to be incremental past 12 months. 5 Delaying implantation and thereby reducing the number of unnecessary ICD implants would not only reduce the patient risk associated with implantation (5-10% risk of infection, pneumothorax, or lead displacement 14) and inappropriate shocks (~20% lifetime risk 15) but also reduce the psychological stress caused by fear of inappropriate shocks. Additionally, reducing the implantation of unnecessary ICDs would be cost-saving, not only for the device cost, generator replacements, and initial complications (a de novo device infection costs ~€23 000 16) but also for follow-up pacing clinics as well as patient travel and convenience. The counter argument in favour of early implantation is that some patients, while awaiting NYHA/LVEF improvements beyond 3 months, may have a fatal arrhythmia; with effective personalized risk stratification, however, this risk would be minimised. Risk stratification Prolongation of QRS is associated with increased SCD risk. 17 Guidelines recommend that those with a QRS > 150 ms and LBBB be treated with CRT-D therapy, with strong evidence of morbidity and mortality benefit. 9 For patients with a narrow QRS, there is less evidence of the benefit of a defibrillation device. MADIT-2 demonstrated that, although the majority of enrolled patients had a QRS duration of <150 ms, the mortality reduction in these patients treated with an ICD was not statistically significant. 1 This therefore leads to the question: would all patients with HFREF, a narrow QRS complex, and current OMT benefit from an ICD? Or should only high-risk subgroups be offered ICD therapy? There is emerging evidence of patient characteristics, imaging, and biomarkers that may aid future stratification of those most likely to benefit from primary prevention ICD therapy. The DANISH trial, published in 2016, found no all-cause mortality benefit of ICD therapy in those with non-ischaemic cardiomyopathy. This trial predated the widespread use of ARNi and SGLT2i. In post hoc propensity-matched analyses of PARADIGM-HF, 18 ARNi in addition to ICD use was found to have a larger impact on SCD in non-ischaemic cardiomyopathy compared with an ischaemic aetiology. This reinforces the improved prognosis of those with a non-ischaemic aetiology who are on current OMT. Cardiovascular magnetic resonance (CMR) has emerged as an important tool for VA risk assessment. Assessment of the presence and burden of myocardial fibrosis (a well-established substrate for VA) using late gadolinium enhancement has become more widely accessible and utilized. 19
CMR GUIDE 20 is an important ongoing randomized controlled trial, which plans to use CMR to identify fibrosis and randomize patients with mildly to moderately impaired LV systolic function (LVEF 36-50%) to ICD vs. implantable loop recorder (ILR). The primary endpoint will be SCD/VA. This study may further highlight the importance of myocardial fibrosis as a risk factor independent of LVEF. Elevated NT-proBNP has also been shown to increase the likelihood of VA/SCD and is therefore an important stratification marker for identifying those most likely to benefit from ICD therapy. This has also been shown in DAPA-HF post hoc analyses, 5 demonstrating that NT-proBNP was the largest predictor of VA/SCD apart from previously documented VA (i.e., patients who already had an indication for secondary prevention ICD therapy). As panels of biomarkers become cheaper and more accessible, additional biomarkers such as ST2 and galectin-3 may become standard practice for composite biomarker risk stratification. Although the EU-CERT-ICD 21 controlled multicentre cohort study recently showed a mortality benefit of ICD therapy, the patient cohort was not on contemporary OMT. It did, however, highlight two important non-benefitting subgroups: those with diabetes and those aged >75 years. Therefore, as the HF population ages with more co-morbidities such as diabetes, the overall benefit of primary prevention ICDs may fall. ILRs provide the advantage of continuous monitoring of cardiac rhythm over a 3-year period with daily remote transmission. It is acknowledged that they currently do not deliver direct therapy in the event of a VA that results in SCD; however, they enable early detection for secondary VA prevention, and in the future, they may be able to immediately notify the nearest emergency service in the case of sustained VA. The future of non-invasive monitoring devices, which includes photoplethysmography technology in wrist-worn devices, 22 shows promise for monitoring life-threatening arrhythmias. As these become more widely validated and available, they are likely to become central to appropriate risk stratification. SCD risk models such as the Seattle Proportional Risk Model have been validated in large cohorts. 23 Despite its limitations (it was developed in 2015, before the widespread use of sacubitril-valsartan/SGLT2i, and does not incorporate CMR), it identifies patient characteristics such as younger age, male sex, and elevated BMI that confer a higher SCD risk and, conversely, diabetes and renal dysfunction, which confer a lower risk. This risk model could be incorporated into ICD therapy decisions, rather than relying solely on LVEF/NYHA, but it is not included in the current 2021 ESC guidelines. We therefore feel these guidelines offer limited insight into personalisation of the risk/benefit of ICD therapy in the current era of modern HF management. Future considerations Although our perspective may not be welcomed by ICD manufacturers or some implanting physicians, it is time for a trial to determine whether, today, primary prevention ICD implantation would still convey any prognostic benefit in ischaemic and non-ischaemic HFREF in patients with a narrow QRS duration in the current era of disease-modifying therapy.
Such a trial could also identify a more refined risk stratification system (rather than LVEF and NYHA classification alone), incorporating, for example, the distribution of fibrosis on CMR (size, location, and extent), the use of wearable technology, patient characteristics, and biomarkers. Such a trial could also offer guidance on the timing of ICD therapy. It could be conceived that close arrhythmia monitoring with an ILR or future wearable technology could be the most effective strategy in high-risk patients on modern OMT. There are no trials planned in intermediate/high-risk HFREF patients; however, the planned PROFID project, 24 which will compare low-risk HFREF patients on OMT with and without an ICD, is currently recruiting, with results hoped for in 2025. Additionally, this EU-funded randomized open-label trial will also challenge the use of LVEF for assessing the risk of VA/SCD, with a second study randomizing those with an LVEF >35% who are at high risk of SCD to an ICD alongside OMT. This trial, we hope, will provide more robust evidence towards a more personalized and effective approach to primary prevention ICD therapy. Until then, we are left with an unsettling feeling of whether old evidence still holds true for implanting primary prevention ICDs in an older HF population in the current era of OMT.
2022-08-26T06:17:07.476Z
2022-08-24T00:00:00.000
{ "year": 2022, "sha1": "3b5bf6a707acfc664b1a57c43b913f898956e639", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ehf2.14113", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "00761957f7e404f007669e230934b3eac74bd9bc", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
225265122
pes2o/s2orc
v3-fos-license
Dependence of Weed Composition on Cultivated Plant Species and Varieties in Energy-Tree and -Grass Plantations: Energy plantations create new habitats in agricultural landscapes with species compositions different from those in forests or farmlands. The purpose of our nine-year research project (2010-2018) was to evaluate the dependence of weed-species richness and their selected ecological aspects on stands of energy-plant species and varieties in energy-tree and -grass plantations under the conditions of Central Europe, on the basis of a case study. The permanent research plots were established in plantations containing two varieties of willow (Tordis and Inger), one poplar variety (Pegaso), and one clone of Miscanthus × giganteus. This evaluation included the species composition of the understory flora, the habitat preferences of the different species, life cycle, life forms, ecological demands, and the harmfulness of these weed species. The ground flora of energy plantations is predominantly composed of synanthropic plants of a weedy character, with differences in species composition among the different energy-tree and -grass species and varieties. The total number of vascular plant species was 98. The highest number of species (58) was recorded in the Tordis and Inger willow varieties, and the lowest in the Pegaso poplar variety (45). Perennial species prevailed by their share, 10 of which were found in all four research plots. Therophytes and hemicryptophytes prevailed. Most species have high light requirements and are typical of mild-to-warm suboceanic areas, demanding freshly moist, alkaline soils that are medium to rich in mineral nitrogen. Fifty percent of all observed species are considered weeds in Slovakia. The "very dangerous" category represented 46.94% of weeds, the "less dangerous" category 51.02%, and the "nondangerous" category 2.04% of the 49 species. The biggest share of "very dangerous weeds" was found in the poplar stand (38.78%), less in the willow stands (32.65% and 28.57%), and the least in the miscanthus stand (26.53%). The weeds of the Tordis variety were relatively little influenced by the specific environmental conditions, and the weeds of the Inger variety were mainly defined by the soil reaction. Weeds in the undergrowth of both Miscanthus × giganteus and poplar (Pegaso) had the greatest affinity to mineral nitrogen content and temperature requirements. Introduction Agricultural-land use is one of the key pressure points on biodiversity [1]. The impact of agriculture on biodiversity is attributed, first of all, to the conversion of natural ecosystems to crop fields. The research site has a mean annual temperature of 9.6 °C (the average temperature ranges from 15 to 17 °C during the growing season from March to October) and average annual rainfall of 560 mm. The soils at the research site are fluvial soils and Haplic Luvisol. The average soil pH was 7.25 (in CaCl2), the average content of nitrogen was 1479 mg/kg (according to Kjeldahl), and the average humus content was 2.31% (based on Cox). The terrain was a plain without signs of surface erosion (slope 0° to 1°). The plantations were established in the spring of 2009, with effective mechanical and chemical weed control eradicating all weeds in the first year. During the first winter after establishing the plantation, the stems were cut back to ground level to support the growth of multiple stems.
From 2010, no agricultural practices were applied except harvesting the woody biomass in the fifth and eighth years after planting (willow, poplar) and the herb biomass every year (miscanthus). The research area consisted of three blocks, each divided into three or five fields: one block for five different willow varieties (the target varieties selected for our research analysis were Tordis and Inger), one for five poplar varieties (the selected variety was Pegaso), and one for three miscanthus clones (the selected clone was Miscanthus × giganteus). Each field with one variety or clone within a block had a total area of 75 m², and the compact blocks were separated from each other by 5 m wide ploughed corridors. The permanent sampling plots (1 sampling plot = 1 quadrat) had an area of 25 m² (2.5 × 10 m) per species or variety and were situated in the middle of the studied field to exclude the edge effect. Description of Species and Varieties Tordis ((Salix schwerinii × Salix viminalis) × Salix viminalis) is a hybrid between the Swedish varieties Tora and ULV, and Inger (Salix triandra × Salix viminalis) is a cross between a Russian clone and the Swedish variety Jorr. The Salix plantations were established by planting cuttings in double rows with a distance of 1 m between the rows within a double row; the distance between two double rows was 2 m. The poplar variety Pegaso is a hybrid of Populus × generosa × Populus nigra, planted in double rows spaced 1 × 0.75 m, with a distance of 2 m between the double rows. Miscanthus × giganteus is a sterile hybrid that cannot form fertile seeds as a consequence of its triploidy [21,22]. The miscanthus plantation was established by planting rhizomes with a spacing of 1 × 1 m. Sampling and Data Analysis The method used for recording the weed composition was sampling by phytocoenological relevés over the whole permanent sampling plots of 25 m². In the sampling survey, the presence of species and their relative abundance were assessed using a modified cover-abundance scale according to Braun-Blanquet [23,24]. The survey was performed during nine consecutive growing seasons between 2010 and 2018 at 14-day intervals (from March to October, 16 times a year). The species nomenclature was updated according to The Plant List [25], created by the Royal Botanic Gardens (Kew) and the Missouri Botanical Garden. When assessing the proportion of weed species, we considered a weed to be a plant whose population grows entirely or predominantly in situations markedly disturbed by humans, without being deliberately cultivated [17]. Weed species were classified according to the systems valid for the environmental conditions of Slovakia according to Líška et al. [26]. The Jaccard index (Jaccard similarity coefficient) was used to assess differences in the species composition of weeds on the four research plots. The index measures the similarity between finite sample sets and is defined as the size of the intersection divided by the size of the union of the sample sets, as follows: J(A, B) = C/(A + B − C), where J(A, B) is the Jaccard index, A is the number of plant species in one research plot, B is the number of plant species in the compared plot, and C is the number of species common to both plots.
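In code, the index reduces to a set operation over the two species lists (a minimal sketch with illustrative names; this is equivalent to C/(A + B − C) above):

```python
def jaccard(species_a, species_b):
    """Jaccard similarity: shared species / total distinct species."""
    a, b = set(species_a), set(species_b)
    return len(a & b) / len(a | b)

# e.g. jaccard(flora["Inger"], flora["Miscanthus"]) for a Table 2 entry
```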
Species habitat preferences were based on the categories described by Ellenberg et al. [27] and Dölle and Schmidt [28]: arable field (a), grassland (g), ruderal site (r), and woodland (w) species. Some species were assigned to more than one group of habitat preferences (a/r and g/r, respectively). According to life cycle, annual plants (one-year life cycle), biennials (two-year life cycle), and perennials (three-or-more-year life cycle) were distinguished. Tree and shrub species were grouped separately according to their realized or potential ecological traits and functions. The life forms were classified according to Raunkiaer [29] and sensu Ellenberg et al. [27]: T, therophytes; G, geophytes; H, hemicryptophytes; C, herbaceous chamaephytes; z, ligneous chamaephytes; N, nanophanerophytes; and P, ligneous phanerophytes. We applied this classification to the potential life cycle of weeds, even though all the spontaneous plants studied in our research occurred exclusively in the herbaceous layer, reaching at most the lower shrub layer (usually up to 0.3-0.6 m, rarely up to 1 m). As a consequence of short-cycle harvests, the trees and shrubs grew only as juvenile plants, comparable to herbs in their size and ecological functions. In our case, juvenile ligneous phanerophytes and ligneous chamaephytes ("woody plants") were classified as part of the herbaceous layer. The ecological demands of species were based on Ellenberg indicator values, including requirements for light, temperature, continentality, moisture, soil reaction, and nitrogen [27]. Ecological analysis of the weed composition in each sample plot was done using multivariate-ordination methods in Canoco software for Windows, version 4.5 (Microcomputer Power, Ithaca, NY, USA; sensu ter Braak [30], ter Braak and Šmilauer [31]). The method chosen for this analysis was a linear, indirect ordination (principal component analysis, PCA) of the average Ellenberg indicator values for each factor in each variant. From the viewpoint of harmfulness, weeds were classified into three groups [26]. Plants that show massive growth, are deep-rooted, and pose a serious danger to crops even at low numbers are usually considered "very dangerous weeds". It is necessary to pay increased attention to these weeds and to apply radical (mechanical or chemical) or combined (mechanical-chemical) measures. Species with high reproductive coefficients must be regulated even at low occurrences. "Less dangerous weeds" (occasional and transient) are of medium size and do not present a potential danger to crops in integrated stands under a normal weed-infestation rate. "Less significant (negligible)" or "nondangerous" weeds are species of smaller (shorter) growth, located in the ground layer of the stand, with little tendency to overgrow the crop. Under normal occurrences, these weeds do not pose a serious danger to crops; special regulatory interventions (the application of targeted radical chemical and/or mechanical measures) are unnecessary, and it is sufficient to use common agrotechnical operations and to maintain an integrated, complete crop stand in good condition and health. Species Composition and Habitat Preferences In the stands of Salix, Populus, and Miscanthus × giganteus, 98 vascular plant species were found, comprising 82 herbaceous species and 16 woody species (11 in Tordis, 6 in Inger, 5 in Pegaso, and 5 in miscanthus). The highest number of species was recorded for the willow varieties Tordis and Inger (58 each), and the lowest for the poplar variety Pegaso (45).
Species Composition and Habitat Preferences

In the stands of Salix, Populus, and Miscanthus × giganteus, 98 vascular plant species were found. These comprised 82 herbaceous species and 16 woody species (11 in Tordis, 6 in Inger, 5 in Pegaso, and 5 in miscanthus). The highest number of species was recorded for the willow varieties Tordis and Inger (58 each), and the lowest was recorded for the poplar variety Pegaso (45). There were 16.33% woodland species, 10.20% arable-land species, and 16.33% grassland species found in the understory of the studied plots, with the highest percentage being ruderal species (28.57%). The remaining species can be classified as common to arable lands and ruderal sites, or to grasslands and ruderal sites. The habitat preferences of the different weed species in the studied plots are given in Table 1.

The weed floras differed in the similarity of their species composition (Table 2). The largest similarity was observed between the willow variety Inger and Miscanthus × giganteus, and the lowest was between the willow variety Tordis and the poplar variety Pegaso, and between the poplar variety Pegaso and Miscanthus × giganteus. These large differences may be caused by the different utilization and allocation of resources and the dissimilar growth and harvest frequency of trees and tall grasses. The following species occurred in the understory of all four sampling plots: Chenopodium album, Cirsium arvense, Convolvulus arvensis, Elymus repens, Epilobium hirsutum, Equisetum arvense, Lactuca serriola, Lathyrus tuberosus, Lepidium draba, Prunus padus, Stenactis annua, Symphyotrichum novi-belgii, Symphytum officinale, Taraxacum officinale, Torilis japonica, Tripleurospermum inodorum, Veronica persica, and Viola arvensis.

Life Cycle

Perennial species (45.92%) were dominant among all vascular plant species. The proportion of annual species was 28.57%, that of biennial species was 9.18%, and that of trees and shrubs was 16.33%. The highest proportion of perennial species was found for Pegaso (55.55%), and the lowest was found for Tordis, with 43.4%. The highest share of trees and shrubs was recorded for Tordis, with 22.41%.

Ground-Flora Evaluation According to Ecological Demands of Species

The occurrence, growth, and life cycle of weeds in stands of different energy-tree and -grass varieties were partially influenced by different kinds of changed environmental features (changes in light intensity near the ground during the growing season, water balance, material flows, etc.), which generated changes in the make-up and phenotypic plasticity of organs, the phenology of species, and the vitality and development of generative organs.

• Light requirements. Shade-tolerating to semi-shade-tolerating species were the tree species Acer pseudoplatanus, Fraxinus excelsior and Prunus avium, and the herb species Geum urbanum. The average values of Ellenberg's indicator numbers for weeds in the different energy crops ranged from 6.81 to 6.98, suggesting that most species had higher light requirements, often growing in full light, but they tolerated the shade in the energy plantations well.
• Temperature requirements. Only one species, Juglans regia, has high-temperature requirements, and one species, Senecio nemorensis, has low-temperature requirements.
• Moisture requirements. The average indicator values for each variant were around 5.07, which indicated that most occurring species require moist soils (avoiding both wet and frequently dry soils). One species required wet soils (Silene baccifera), and three species were drought-tolerant (Euphorbia cyparissias, Lepidium draba, and Veronica spicata).
• Soil-reaction requirements. The average indicator values for each variant ranged from 6.79 to 7.1, which indicated species that mostly prefer alkaline soils (pH 6.5-8). Mentha longifolia was the one species demanding soils very rich in calcium, and Viola canina is a species that requires acidic soils.
• Nitrogen requirements. We identified three species that occur in soils with high nitrogen concentrations (Calystegia sepium, Sambucus nigra, and Symphyotrichum novi-belgii), and three species that occur in soils poor in nitrogen (Hypericum maculatum, Veronica spicata, and Viola canina). The average indicator values for each energy-plant stand were in the range of 6.39-6.7, so most species were typical species occurring in medium-to-rich nitrogen soils.

The principal component analysis (PCA) of the relationships between the Ellenberg indicator values of ground-flora species in the different plots (Figure 2) showed that all factors had approximately the same driving strength (importance) for our computation within the studied plots (cf. the same length of axes). Furthermore, the closest statistical relationship in terms of species composition was between light intensity and continentality, and between temperature requirements and soil mineral nitrogen content. Moisture was negatively correlated with temperature and mineral nitrogen, and its relation to the other analyzed factors was almost neutral (slightly positive to pH and slightly negative to light intensity and continentality). The ground flora of the Tordis variety was relatively little influenced by its specific environmental conditions. The herbaceous layer of the Inger variety was mainly defined by soil reaction. The vascular plants in the undergrowth of both Miscanthus × giganteus and the poplar trees (Pegaso) had the greatest affinity to mineral nitrogen content and temperature requirements.

Weed-Species Harmfulness

Fifty percent of all spontaneous plant species in the studied energy plantations are considered weeds in Slovakia. The "very dangerous" category represented 46.94% of weeds, the "less dangerous" category represented 51.02%, and the "nondangerous" category represented 2.04% of the 49 species. "Very dangerous species", such as Chenopodium album, Cirsium arvense, Convolvulus arvensis, Elymus repens, Equisetum arvense, Lepidium draba, Taraxacum officinale, and Tripleurospermum inodorum, were found in all studied plots. The highest share of "very dangerous weeds" was found on the plots with the poplar Pegaso (38.78% of all weeds in the variant), it was lower in the willow stands (32.65% in Inger and 28.57% in Tordis), and the smallest proportion was recorded in miscanthus (26.53%). The most abundant "very dangerous" weed species were Cirsium arvense, Convolvulus arvensis, Equisetum arvense, Galium aparine, Lepidium draba, and Taraxacum officinale. Moreover, 61.20% of arable-land species belonged to this category.

Discussion

Several authors point to the fact that SRC and perennial energy-grass stands (willow, poplar, and miscanthus) can positively affect local biodiversity [3,9,32,33]. On the other hand, our results showed that half of the vascular plants in the understory could be classified as weeds, and weeds can be harmful (see also [4]). In our research, the total number of vascular plant species was 98. Comparable results were reported by Baum et al. [34], who listed 79 and 98 species in Swedish and German research plots, respectively. In Poland, 63 and 68 spontaneous vascular plant species were found [4]. We observed 53 species in the miscanthus stand. Clapham and Slater [10] recorded 31 weed species within a miscanthus plantation, with Epilobium montanum, Ranunculus repens, and Rumex obtusifolius being the most frequently observed. Several studies [35-37] point to the fact that SRC species, as well as their varieties, influence the understory composition.
We assume that this influence is due to the different environmental conditions created by the SRC crops. In some cases, seasonal meteorological conditions were more important than the taxonomic or genetic identity of the energy crops [38]. The comparison of plant-species habitat preferences showed the largest proportion of ruderal species in the SRC plots, which does not correspond with the findings of Baum et al. [34], who recorded the largest proportion among grassland species. Weed floras in the tree and herb energy crops of Poland included 78% and 79% segetal and ruderal species, 16% and 18% meadow species, and 3% and 6% forest species (averages from two different time scales) [4]. The overall percentage of woodland species was comparable to the findings of Verheyen et al. [39], at approximately 16%. The percentage of segetal species in the SRC stands indicated their penetration from the surrounding agricultural land.

The most dominant species in the understory were perennial species, comprising 45.92% of all vascular plants in the plantations. Baum et al. [3] found similar perennial species to be the most common species in SRC research sites in Sweden and Germany (e.g., Cirsium arvense, Elymus repens, Taraxacum officinale, and Urtica dioica), but the perennial-species composition in the study sites of willow SRCs in central Latvia differed more [14]. In Poland, the share of perennials in the tree and herb energy crops was 37% and 40% (averages from two different time scales) [4]. Regarding life forms, therophytes and hemicryptophytes presented the highest proportion, comprising 27.66% of the observed communities. This does not correspond with the results of Welc et al. [40], who recorded an average of 63% hemicryptophytes and 8% therophytes during five growing seasons. The abundant occurrence of hemicryptophytes in energy crops indicates the absence of soil disturbance (according to Welc et al. [40]), and the presence of therophytes can be explained by continuous expansion from the adjacent arable lands (the energy plantations in our research were of smaller size and consequently more susceptible to airborne and other weeds). Lososová et al. confirmed that crop stands whose management involves fewer disturbances, such as cereals, harbored fewer geophytes and had higher species richness [41].

Plant species in energy stands have adapted to a wide range of environmental conditions. In some cases, light-demanding weeds prevailed [14,37]. A study by Archaux et al. [36] showed that soil moisture and soil nitrogen were major determinants of plant communities. Other authors report a high share of weeds typical of fresh, moist soils in SRC [14,39]. In our research, according to the Ellenberg indicator values for mineral nitrogen content in the soil, most ground-flora species in the majority of plots were typical species requiring medium-to-rich nitrogen soils, in line with the results of Verheyen et al. [39]. We identified 50% of the species as weeds, 46.94% of which were harmful to agriculture. These results significantly differ from those of Verheyen et al. [39], who found few harmful weeds in SRC plots. The localities of endangered, rare, or protected plant species are priority habitats for conservation, and a short-rotation coppice can harbor such species (cf. Fry and Slater [42]). In some cases, a few rare species were found in SRCs [7], but their occurrence was not recorded during our study.
Conclusions

The ground flora of energy plantations is composed predominantly of synanthropic plants of a weedy character with the potential to colonize new areas. There are differences in the spontaneous phytodiversity between different species and varieties of energy plants, but the basic character of the spontaneous vegetation remains stable. Similarities of the weed floras between the different sampling plots varied over a wide range (from 0.288 to 0.586 according to the Jaccard index). Despite several years of research, it is relatively difficult to evaluate the ground flora of SRC and energy-grass plantations, as species composition and its dynamics are influenced by different local and seasonal factors that have not been studied. After nine growing seasons of research, we can confidently state the following:

• the total number of vascular plant species in the SRC and miscanthus-stand understories (all plots) was 98. The highest number of species was recorded for the research plots with Tordis and Inger willow, and the lowest for the Pegaso poplar. Perennial species dominated, 10 of which were found in all plots. Ruderal therophytes and hemicryptophytes prevailed;
• weeds represented 49 species among all identified vascular plant species in the SRC and miscanthus understory, 46.94% of which were very harmful. The poplar plantation hosted the highest number of "very dangerous weeds", and the fewest were observed in the miscanthus stands (willow stands were intermediate). Non-native invasive species were also recorded in the ground flora;
• according to the ecological requirements of the weeds, most species have higher light requirements. With respect to temperature requirements, the recorded species are predominantly mild-to-warm suboceanic species. Species that prefer freshly moist, alkaline soils that are medium-to-rich in mineral nitrogen predominated;
• the weeds of the Tordis willow variety were the least influenced by the environmental conditions of the energy-plant taxon. The weeds of the Inger willow variety were mainly defined by the soil reaction. The weeds in the undergrowth of miscanthus (Miscanthus × giganteus) and the poplar trees (Pegaso) had the greatest affinity to soil nitrogen content and temperature requirements.

This research was conducted on experimental SRC and energy-grass plantations in southern Slovakia under specific natural conditions. Therefore, the results achieved in our research cannot be generalized to other areas/regions. Nevertheless, these results can broaden the inconsistent and patchy knowledge on this new source of weeds on farmlands. Long-term and seasonal fluctuations have not been analyzed, as their field study needs a longer time span. In spite of these limitations, whose analysis is a future task, our research confirmed the importance of the taxonomical identity of the cultivated energy plant for the weed composition in energy plantations under the same environmental conditions. It underlines the importance of identifying the most harmful weeds for each energy-plant species or variety under the regional climate and soil conditions.

Author Contributions: Conceptualization, E.P., P.P., L.K., and A.F.; methodology, L.K. and A.F.; software, L.K. and A.F.; validation, E.P. and L.K.; formal analysis, L.K. and E.P.; investigation, A.F., P.P., and L.K.; resources, A.F., P.P., and L.K.; data curation, A.F. and L.K.; writing-original draft preparation, E.P. and A.F.; writing-review and editing, E.P., L.K., A.F., and P.P.; visualization, L.K.
and P.P.; supervision, P.P.; project administration, L.K.; funding acquisition, L.K. All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded by the Grant Agency of the Faculty of European Studies and Regional Development, Slovak University of Agriculture in Nitra, Slovakia, grant number 08/2017.

Conflicts of Interest: The authors declare no conflict of interest.
Implementation of a seven-day hospitalist program to improve the outcomes of the weekend admission: A retrospective before-after study in Taiwan

Objective: Patients admitted during weekends may have worse outcomes than those admitted during weekdays. Adjusting the practice of senior physicians over weekends may reduce the weekend effect.

Design: A controlled before-after study, with propensity score matching (PSM) for potential confounding variables, to compare outcomes between weekday and weekend admissions.

Setting: A 2000-bed medical centre in Taiwan.

Participants: Hospitalised general medicine patients cared for by traditional internal medicine teams (pre-intervention cohort) and those cared for by hospitalists after introducing a seven-day hospitalist program in the first six-month (post-intervention cohort) and following three-year periods.

Main outcome measures: Proportion of intensive care unit (ICU) admissions, cardiopulmonary resuscitation (CPR) events, and in-hospital mortality.

Results: The pre-intervention cohort included 982 patients. Significantly higher mortality rates (11.3% vs. 6.2%, p = 0.032) were recorded in the case of weekend admissions, with similar proportions of ICU admissions and CPR events. The post-intervention cohort included 601 patients. No significant difference was recorded in any of the main outcomes between weekday and weekend admissions. PSM of the pre-intervention and post-intervention cohorts showed a shorter LOS after the intervention, with no difference in ICU admissions, CPR, and mortality for weekday and weekend admissions, respectively. The three-year cohort that followed, consisting of 3315 patients, showed no difference in outcomes between weekday and weekend admissions. After PSM, there were no significant differences in ICU admission rates (1.0% vs. 1.8%), CPR events (0.3% vs. 0.2%) and hospital mortality rates (8.1% vs. 8.5%) when weekday and weekend admissions were compared.

Conclusions: The seven-day hospitalist program shows potential in providing equally safe care for both weekday and weekend general medicine admissions with sustainable development.

Introduction

Several studies have reported that higher mortality rates were observed in patients admitted to hospitals on weekends, especially in the case of emergency admissions [1-3]. Possible explanations for this 'weekend effect' may include differences in staffing [4], the unavailability of important procedures [5] and variations in the patient cohorts [6]. It has been expected that focussing on the working practices of senior hospital doctors at the weekends could improve care quality and reduce the weekend effect [7]. Over the past two decades, the hospitalist system has become the dominant inpatient care model in the United States [8,9]. Numerous studies have demonstrated the advantages of using hospitalists to care for general medical [10,11], paediatric [12], stroke [13] and surgical patients [14,15]. Potential advantages of the hospitalist system include greater expertise in inpatient care and better availability of physicians during the duration of hospitalisation [16]. It has been hypothesised that the weekend effect results from different staffing between weekdays and weekends [4], but hospitalists are typically available at weekends. Therefore, introducing hospitalists into the traditional inpatient care model can potentially provide equal quality of care during weekends because of the typical 24-hour/7-day coverage [17].
In late 2009, Taiwan established a pioneer hospitalist program for acute general medicine admissions, which produced similar outcomes of care and higher throughput efficiency when compared to traditional internal medicine practices [18]. However, improvements in quality indicators and outcomes have been inconsistent [16,19]. The current literature does not establish whether the hospitalist system is exempt from the weekend effect. The research hypothesis was that introducing a hospitalist system would produce consistent outcomes for both weekday and weekend admissions. Therefore, this study aimed to investigate the outcomes of general medicine hospitalisations before and after introducing a 7-day hospitalist system.

Study setting

The study was conducted at a 2000-bed medical centre in North Taiwan. The idea of the hospitalist program was conceived in October 2009. As part of this, a pioneering, hospitalist-run acute general medicine unit had both attending physicians and nurse practitioners (NPs) admitting general medicine patients from the emergency department (ED). The outcomes of the first three months were reported [19]. In contrast to traditional academic medical services, the hospitalist program did not have resident physicians to take care of inpatients, except during the night shift, when there was resident-physician coverage under hospitalist supervision. In January 2010, three shifts were designed for the hospitalist team on weekdays. New admissions from the ED, which typically presented between 11 am and 5 pm, were assigned to both the day shift (8 am to 5 pm) and the bridge shift (1 pm to 11 pm) hospitalists. The night shift was from 11 pm to 8 am; the hospitalist on duty took handoffs from the bridge shift and covered a maximum of 36 beds over the course of the night. During the weekends, one hospitalist acted as the on-duty physician to both cover a 24-hour shift and admit new patients. The remaining hospitalists who had daytime duty made the rounds of their patients to maintain continuous care, and also to ensure that all inpatients could approach their daytime attending hospitalists at any time during their stay.

The role of nurse practitioners (NPs) in the hospitalist program was to collaborate with hospitalists to provide care to hospitalised patients. NPs could help hospitalists to evaluate the patients' conditions, arrange examinations that had to be performed outside the ward, contact the consultants and document records. They could also perform nursing procedures including wound dressing, and nasogastric tube and urinary catheter insertions. The hospitalist ward ran 7 days per week, and the workload was not significantly lower during weekends; this is why we needed to maintain the same staffing of NPs during weekends. The staffing structure of the hospitalist program, on both weekdays and weekends, is compared with that of the traditional general medicine service in Table 1. The patient-to-nurse ratio, consultation service, and social supporting system were the same in these two systems.

Data collection

This study was based on a longitudinal hospitalist performance research study approved by the Research Ethical Committee of the National Taiwan University Hospital (NTUH, 201112161RIC). The patient data, before July 2011, was retrospectively collected from medical records. Since then, each admission to the hospitalist general medicine unit was prospectively collected.
The only missing data were laboratory results, because some tests were not mandatory for each patient on admission. Therefore, laboratory data were used descriptively, without comparison. The study design and methods comparing weekday and weekend admissions were approved by the Research Ethical Committee of the NTUH (201308025RINA), which waived the requirement for informed consent. Written consent from study subjects was not obtained, and confidentiality assurances were addressed by abiding by the data regulations of the NTUH. Weekend admission was defined as an admission between 12:00 am on Saturday and 11:59 pm on Sunday. Patients admitted during the remaining time periods were considered weekday admissions [4]. The baseline characteristics of each patient were recorded, including age, gender, admission Barthel index (BI) and admission diagnosis. The BI was used to rate the patients' activities of daily living on admission [20]. Since the study used primary patient-level data, laboratory data on admission were also compared between groups to evaluate disease severity.

Measurements

The primary outcome variables were the length of hospital stay (LOS), proportion of intensive care unit (ICU) admissions, cardiopulmonary resuscitation (CPR) events and hospital mortality rates. The pre-intervention group consisted of a comparable patient group cared for by traditional internal medicine models, just before the implementation of the hospitalist system. From November 1, 2009 to December 31, 2009, patients admitted from the ED to the seven traditional general medicine wards (with a total of 239 beds) were sampled. The medical records of the patients were retrospectively reviewed. The post-intervention group included all the patients admitted to the then-new hospitalist-care ward from January 2010 to June 2010. Baseline characteristics, hospital course and outcomes were analysed, in both the pre- and post-intervention cohorts, following the same steps. These analyses focused on the differences between weekday and weekend admissions, rather than on the absolute value of each outcome. To evaluate the long-term effect of the intervention, a three-year cohort (from January 2010 to December 2012) was also collected for analysis. Because medical records of hospitalisation were used, there were no missing data or loss to follow-up.

Statistical analysis

Data were analysed using the SPSS software (version 16.0, SPSS Inc., Chicago, IL, USA). Inter-group differences were compared using the independent t-test for continuous variables; categorical variables were compared using Fisher's exact test. Acute medical admissions, unlike either elective surgery or scheduled admissions, occurred on both weekdays and weekends. Because of the inherent differences between weekday and weekend admissions in terms of baseline characteristics, we used propensity score matching (PSM) with a ratio of 1:1 to adjust for these differences. Propensity scores were estimated using a logistic regression model of the patient demographics and clinical characteristics in the three-year follow-up cohort. The covariate balance between the matched groups was examined. Descriptive analyses were repeated to compare the outcome variables. Because of the non-parallel comparison, outcomes of weekday and weekend admissions were also compared between the pre- and post-intervention periods using the PSM technique. The statistical significance was set at a two-sided p < 0.05.
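As an illustration of the matching step described above, the following sketch implements propensity score estimation with logistic regression followed by greedy 1:1 nearest-neighbour matching. The covariates follow the paper (age, gender, Barthel index), but the synthetic data and the greedy matching strategy are assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical cohort: age, gender (0/1), and Barthel index on admission;
# `weekend` flags weekend admissions (the group to be matched).
n = 1000
X = np.column_stack([
    rng.normal(70, 12, n),   # age
    rng.integers(0, 2, n),   # gender
    rng.normal(55, 25, n),   # Barthel index
])
weekend = rng.integers(0, 2, n).astype(bool)

# Step 1: estimate propensity scores P(weekend admission | covariates).
ps = LogisticRegression().fit(X, weekend).predict_proba(X)[:, 1]

# Step 2: greedy 1:1 nearest-neighbour matching without replacement.
weekday_pool = list(np.flatnonzero(~weekend))
pairs = []
for i in np.flatnonzero(weekend):
    if not weekday_pool:
        break                                   # no controls left to match
    j = min(weekday_pool, key=lambda k: abs(ps[k] - ps[i]))
    pairs.append((i, j))
    weekday_pool.remove(j)

print(f"{len(pairs)} matched weekend/weekday pairs")
```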
Main outcomes of the pre- and post-intervention groups

From November 1, 2009 to December 31, 2009, a total of 982 patients admitted from the ED were enrolled. The clinical courses of weekday and weekend admissions showed that the LOS was similar. The proportions of ICU admissions (1.4% vs. 1.2%) and CPR events (1.4% vs. 0.6%) during hospitalisation were slightly higher in weekend admissions, but the differences were statistically insignificant. However, the crude rate of hospital mortality was significantly higher in weekend admissions compared to weekday admissions (11.3% vs. 6.2%, p = 0.032) (Table 2).

The post-intervention, six-month cohort included 601 patients. A similar LOS was recorded in both weekday and weekend admissions. In addition, there was no significant difference in ICU admission rates, CPR events and hospital mortality rates (Table 2). However, the proportion of ICU admissions was higher among weekday-admitted patients after the introduction of the hospitalist program. This condition may be related to the difference in the BI scores of weekday and weekend admissions between the pre-intervention and post-intervention groups. Our results given in Table 2 showed that the post-intervention group had lower mean Barthel index scores for both weekday and weekend admissions (44.1 and 48.6, respectively) compared to the pre-intervention group (66.7 and 62.8, respectively).

PSM of the pre-intervention and post-intervention groups yielded comparable demographics, clinical characteristics and laboratory data. ICU admissions, CPR events and hospital mortality showed no statistical difference for either weekday or weekend admissions. However, the LOS was significantly shorter in the post-intervention group, both for weekday admissions (15.4 vs. 8.8 days, p < 0.001) and weekend admissions (10.3 vs. 12.3 days, p = 0.037) (Table 3).

Table 3. Results of propensity score matching for care on weekday and weekend admissions between the pre- and post-intervention cohorts.

Evaluation of the long-term effect of the intervention

A total of 3315 patients, from January 1, 2010 to December 31, 2012, were admitted to the hospitalist-care ward. There was no difference between the two groups with regard to the LOS (10.4 vs. 10.9, p = 0.122), proportion of ICU admissions (2.9% vs. 2.0%, p = 0.156), CPR events (0.4% vs. 0.7%, p = 0.259) and hospital mortality rates (6.8% vs. 7.9%, p = 0.283) (Table 4). Fig 1 depicts the ICU admission, CPR event and hospital mortality rates for the pre-intervention, post-intervention and three-year cohorts, respectively.

As shown by the clinical characteristics (Table 4), weekend admissions had a lower BI when compared to weekday admissions. Because of the different baseline characteristics and diagnosis distributions of weekday and weekend admissions, the PSM technique was used to allow for a fair comparison. Age, gender, and BI at admission were entered into the logistic model to generate the propensity score. After PSM was done, the matched 496 weekday and 496 weekend admissions showed similar patient demographics and clinical characteristics (Table 5). The clinical course and outcome indicators of the two matched groups were compared (Table 5). There were no significant differences in the ICU admission rates (1.1% vs. 1.8%), CPR events (0.3% vs. 0.2%) and hospital mortality rates (8.1% vs. 8.5%) in the cases of weekend and weekday admissions.
Compared to the crude rates, the differences in the indicators were even smaller after propensity score matching. However, the mean hospital LOS was 1.4 days longer in weekend admissions when compared to weekday admissions (p = 0.013).

Discussion

This study demonstrates the performance of a newly implemented hospitalist program on weekend general medicine admissions in Taiwan. No statistical difference was found between weekday and weekend admissions in the pre- and post-intervention groups apart from a shorter LOS in the post-intervention group. In addition, the hospitalist-care ward showed no difference in the crude rates of hospital mortality, ICU admissions and CPR events between weekday and weekend admissions. These effects were sustained during a three-year follow-up after the implementation of the 7-day hospitalist program, even after PSM analysis.

Initially, hospitalists reported cost savings with uncompromised patient outcomes [18]. The performance of the 24-hour/7-day hospitalist program during weekends is a reflection of the hospitalists' effort and value. The reason hospitalists maintain the outcomes of weekend admissions is most probably the inherent around-the-clock coverage feature. In the past, a hospitalist has variously been defined as a physician who spends at least 25% of his time in the hospital [22]. However, most hospitalists presently spend far more time in the hospital. The concept of continuous inpatient care in the hospitalist model explains the derivative advantage brought in by hospitalists. Moreover, prior research demonstrated hospitalists' value in leading, coordinating and participating in multi-disciplinary teams, in rapidly identifying and addressing acute conditions [23], as well as in safely handing off patients [24]. Although patient care outcomes may be maintained by hospitalists, several examinations and key procedures are unavailable during weekends in most institutions [5]. Thus, the efficiency of the hospitalists still depends on several non-technical factors in the hospital. In addition, the results of this study indicated that patients had a significantly shorter LOS in the post-intervention group. The complexity of inpatient care and the workload of medical personnel may influence the LOS of patients. However, the difference in LOS between weekend and weekday admissions after the implementation of the hospitalist system is still unclear and warrants further research.

Several clinical implications from the study may be used to improve weekend inpatient care. First, this study demonstrates that the implementation of the hospitalist system may provide the same or similar levels of care to weekday- and weekend-admitted patients in the general medicine cohort. Second, there is abundant literature addressing the weekend effect; however, previous studies focus on the phenomenon rather than on solution approaches [25]. Not only is it important to note whether the weekend effect exists primarily for a certain disease, but it is also necessary to determine ways to deal with it. For example, adjustment of the weekend staffing model may be worth investigating. Third, consistent with previous studies, severity and comorbidity tend to be higher in weekend admissions [26-28]. The reasons for these findings were not investigated in the present study. However, a statistical adjustment before making comparisons between groups is needed. Moreover, several studies have shown an increased LOS in weekend-admitted patients [5,28].
Unadjusted data may not be adequately comparable because comorbidity affects mortality, which in turn can affect the LOS. Therefore, a methodology with either a confounder adjustment or a matching design should be applied for appropriate comparisons. Fourth, researchers may use a variety of quality indicators, rather than mortality alone, to evaluate the weekend effect [21]. This is because, with the passage of time, diseases and new illnesses may dilute any effect of the admission time itself [29]. Finally, the effect of an intervention may be sustainable [30,31] or non-sustainable [32]. The data on the implementation of the new hospitalist-care ward over a period of three years showed a sustainable effect on the hospital course and outcomes of patients for weekend admissions. This finding implies that redesigning the weekend staffing model, compared with weekdays, could be a sustainable and practical route.

Strengths and limitations of this study

This study presented both the immediate and long-term intervention effects, and matched the pre- and post-intervention cohorts as well as the weekday- and weekend-admitted cohorts for fair comparisons. In addition, several previous studies on the weekend effect have used administrative data [2,5,6], but it has recently been acknowledged that such administrative data were originally collected for non-research purposes [33]. The lack of clinical details, the admission route [34] and the uncertainty of coding may cause biased results [35]. Although this is a single-institution study, the emergency admission route, detailed clinical course and comparable primary laboratory data make the study unique when compared to studies based on secondary analysis.

This study has several limitations. First, it is an observational study; although the PSM technique was applied to adjust for measured characteristics, unmeasured confounders may exist. Although the pre- and post-intervention groups appeared comparable at baseline, the effect size estimate was still at risk of bias due to residual confounding. We also considered the extent of potential bias from unmeasured and unknown confounders, since the present study did not control for inevitable variations. In addition, of note, PSM may lead to some patients being excluded from the final analysed sample because they do not have a match within the specified interval on the propensity score. Second, every system has a unique design and instructions for quality control. The root causes of the weekend effect differ depending on the local context [7]. Hence, this single-centre experience cannot be extrapolated to other models to guarantee similar results. Third, it is a controlled before-after study, and the patient characteristics might change over time. The new hospitalist-care ward had no controllable pre-intervention cohort because there had been no ward admitting patients exclusively from the ED. That was why we enrolled patients admitted from the ED to several wards with a similar case mix in a selected time period, immediately before the hospitalist program, as the comparison cohort. Obviously, our results cannot confirm the earlier evidence on the existence of a persistent weekend effect in other departments of the same hospital or other medical institutions. Fourth, some outcome events, such as CPR, were too rare to reveal any difference. We do not overstate that the hospitalist system can mitigate these adverse outcomes, given their extremely low prevalence.
Conclusions

Apart from a shorter LOS in the post-intervention group, our study revealed no statistically significant differences in hospital mortality, ICU admissions and CPR events between weekday and weekend admissions in the pre- and post-intervention groups. In addition, the 7-day hospitalist program may provide equally safe care to weekday- and weekend-admitted patients with sustainable development. This finding suggests that awareness of the weekend effect may have increased deliberate practice and vigilance in the hospital.
Novel Meta-Learning Techniques for the Multiclass Image Classification Problem

Multiclass image classification is a complex task that has been thoroughly investigated in the past. Decomposition-based strategies are commonly employed to address it. Typically, these methods divide the original problem into smaller, potentially simpler problems, allowing the application of numerous well-established learning algorithms that may not apply directly to the original task. This work focuses on the efficiency of decomposition-based methods and proposes several improvements to the meta-learning level. In this paper, four methods for optimizing the ensemble phase of multiclass classification are introduced. The first demonstrates that employing a mixture of experts scheme can drastically reduce the number of operations in the training phase by eliminating redundant learning processes in decomposition-based techniques for multiclass problems. The second technique for combining learner-based outcomes relies on Bayes' theorem. Combining the Bayes rule with arbitrary decompositions reduces training complexity relative to the number of classifiers even further. Two additional methods are also proposed for increasing the final classification accuracy by decomposing the initial task into smaller ones and ensembling the output of the base learners along with that of a multiclass classifier. Finally, the proposed novel meta-learning techniques are evaluated on four distinct datasets of varying classification difficulty. In every case, the proposed methods present a substantial accuracy improvement over existing traditional image classification techniques.

Introduction

Multiclass classification aims to assign instances of data to one of three or more classes. In a conventional learning process for multiclass classification, considering that there are k > 2 classes, i.e., Y = {C_1, C_2, ..., C_k}, and n training instances, i.e., S = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}, each training instance belongs to one of the k distinct classes, and the objective is to build a function f(x) that, given a new data instance x, can accurately predict the class to which the new instance belongs. In the real world, image classification [1], text classification [2], e-commerce product categorization [3], medical diagnosis [4], and other multiclass classification challenges are quite prevalent.

Multiclass decomposition separates a multiclass classification problem into a group of independent binary learners and re-composes them by combining their outputs to reconstruct the multiclass classification results. There are numerous concrete representations of decomposition methods, such as one-vs.-rest (OvR) [5] and one-vs.-one (OvO) [6]. Notably, OvR trains k unique base learners, for the i-th of which the positive examples are all instances in class C_i and the negative examples are all instances not in C_i; OvO trains k(k − 1)/2 base learners, one for each pair of classes.
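Both decompositions are available off the shelf in scikit-learn; the following minimal sketch on synthetic data (not the experimental setup of this paper) illustrates the number of binary learners each scheme trains.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier

# Synthetic k = 4 class problem, for illustration only.
X, y = make_classification(n_samples=600, n_informative=8,
                           n_classes=4, random_state=0)

base = LogisticRegression(max_iter=1000)
ovr = OneVsRestClassifier(base).fit(X, y)  # scikit-learn clones `base`
ovo = OneVsOneClassifier(base).fit(X, y)

print(len(ovr.estimators_))  # k = 4 binary learners
print(len(ovo.estimators_))  # k(k - 1)/2 = 6 binary learners
```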
Even though OvR and OvO are straightforward to implement and widely used in practice, they have several apparent drawbacks. First, both OvR and OvO are based on the assumption that all classes and their corresponding base learners are independent, ignoring the latent association between these classes in practical situations. For example, in an image classification task for waste recycling, instances of the "paper" class appear to have a higher association with instances of the "cardboard" class than with instances of the "plastic" class. In addition, the training of OvR and OvO is inefficient because of their high computational complexity when k is significantly large, which results in a prohibitively expensive training cost when processing large-scale classification datasets [7].

To alleviate and exploit some of the shortcomings mentioned above, in this work, meta-learning approaches are employed to combine the outputs of the base learners more efficiently and obtain a superior outcome. Several implementations solve this multiclass classification problem, each taking a different approach in terms of algorithms or in how they integrate data from various networks under the umbrella of the ensemble learning family of techniques. Initially, some methods follow a decomposition strategy, employing smart criteria to separate the information [8,9] or various mathematical tricks to find the most suitable binary classifier [7,10]. There is also the issue of information transferability with each new piece of data or when combining networks. We should not forget that there are multiple approaches applied at the level of decision rules [11-13], known as ensemble techniques, which aggregate opinions from different methods or networks into clusters with parallel training [14]. Finally, other studies focus on combining expert techniques that utilize a generalizer to coordinate information allocation for the ultimate decision [15,16].

In this research, we examine the topic of image classification using deep learning to determine appropriate image classification algorithms for multiclass problems. Specifically, we put forward several classification techniques based on ensembles of deep neural networks, trying to increase the accuracy on the multiclass classification problem. We examine and assess several meta-learning techniques/approaches. Each multiclass problem can be divided into smaller binary problems to obtain a better separation between classes and distinct boundaries with regard to the features of each class. In general, we consider an ensemble composed of the primary multiclass classifier supplemented by individual components trained with a focus on a single class (OvR). The application novelty of this approach is that we can blend various multicenter datasets with a diverse focus on the material of interest in a single ensemble scheme. Classifiers are trained in two stages, one for the individual classifiers and the second for decision aggregation. The second layer may be implemented using gating functions (with a Bayesian formulation) or as a meta-learning (neural network) technique trained on a dataset containing the primary classes. Our main contributions are the following:

• A series of novel approaches for combining the output of a decomposed multiclass (image) classification task under the umbrella of the one-vs.-rest framework for the opinion aggregation module.
• A novel opinion aggregation method that combines information derived from sub-class classifiers based on the Bayes theorem (see the sketch after this list). The novelty lies within two key parts:
  - First, multiple unrelated multicenter datasets are used to train the expert sub-class classifiers.
  - Second, the inputs of the final meta-learner level are calculated from the baseline multiclass classifier along with the outputs of the expert sub-class classifiers using the Bayes theorem.
• A novel design for the mixture of experts approach that incorporates the knowledge of a multiclass classifier as a gating model.
• In addition, we put forward two stacked generalization variants with novel characteristics that follow the one-vs.-rest architectural paradigm:
  - The first one divides the initial problem into n separate classifiers and uses a generalizer to learn the optimal weight policy.
  - The second one is similar to the first, but it additionally feeds the generalizer with the output of the baseline multiclass classifier.
• We expand on the concept of shared wisdom from data and explore how various datasets (such as hierarchically labeled datasets or even simply related datasets) can be combined to improve accuracy and create a stable network architecture.
• The methods proposed in this study can also be used to train transfer learning models for even more accurate and generalized classifiers, or even for different learning tasks.
• Last but not least, this framework can be used to apply a series of black-box optimizations, potentially with varying weights and parameters and based on various intuitions and scenarios of interest.
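To make the Bayes-based aggregation idea concrete before the detailed methodology, the sketch below shows one plausible instantiation, not necessarily the paper's exact formulation: the baseline multiclass classifier's output is treated as a prior over classes, each OvR expert's positive-class output is treated as per-class evidence, and the product is renormalized.

```python
import numpy as np

def bayes_aggregate(multiclass_probs, expert_probs):
    """Illustrative Bayes-rule combination: multiply the multiclass
    prior P(C_i) by each expert's evidence for its class and
    renormalize so the result sums to one."""
    posterior = multiclass_probs * expert_probs
    return posterior / posterior.sum()

# Hypothetical 4-class example: the prior favours class 0, but the
# class-1 expert is very confident, shifting the posterior to class 1.
prior = np.array([0.40, 0.30, 0.20, 0.10])    # baseline multiclass output
experts = np.array([0.50, 0.95, 0.30, 0.20])  # OvR experts' P(positive)

print(bayes_aggregate(prior, experts))  # approx. [0.354 0.504 0.106 0.035]
```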
The remaining sections of this work are structured as follows. In Section 2, we review related research and provide the necessary background. In Section 3, the meta-learning phase of the proposed methodology is described, which employs a one-vs.-rest decomposition method with diverse opinion aggregation mechanisms. The experimental design of this research is then described in Section 4.1. The experimental results are reported in Section 4.2, while conclusions and future work are discussed in Section 5.

Background

In this section, the necessary background information is provided. In detail, several multiclass classification learning techniques are presented. We start by introducing traditional ensemble learning techniques, followed by voting ensembles and more sophisticated extensions of voting ensemble learning, mixture of experts techniques, and stacked generalization variants. Finally, in the last subsection, we outline significant developments and related work in multiclass classification utilizing machine learning algorithms, most notably CNN techniques, to offer context for the current work in this area.

Ensemble Learning

Ensemble learning is a technique for constructing and combining multiple models, such as classifiers or experts, to solve a particular computational intelligence problem [17]. Typically, ensemble learning is employed to improve the performance of a model (classification, prediction, feature approximation, etc.) or to counteract the tendency of a weak model. Ensemble learning can also be used to assign confidence to a model's decision features, combine data, conduct incremental learning, non-stationary learning, and error correction, and select optimal (or near-optimal) features [18]. While this article focuses on ensemble learning applications for classification, many of the fundamental ideas discussed can be easily applied to approximation and prediction-related topics. An ensemble-based design is achieved by combining many models, henceforth "classifiers". These systems are, therefore, sometimes referred to as multiple classifier systems or ensemble systems [19]. Several situations in which ensemble-based methods make mathematical sense are elaborated on in this section.

A real-world analogy of ensemble learning is the procedure before a critical medical assessment. Patients typically seek the advice of multiple physicians before committing to a crucial medical procedure, read customer reviews before purchasing a product (especially an expensive plane ticket), and evaluate potential employees by reading their references, among other practices. In each situation, the final evaluation is determined by weighing the diverse perspectives of a group of experts. The primary objective is to prevent the undesirable possibility of an unnecessary medical procedure, a defective product, or an unskilled employee.

Voting Ensembles

The voting ensemble (or "majority voting ensemble") is a machine learning ensemble that combines several models' predictions. It is a technique that can be employed to improve model performance, ideally to the point where the ensemble outperforms every single model within it. The voting ensemble algorithm combines the predictions of multiple models and is appropriate for both classification and regression tasks. In regression, this involves averaging the individual predictions. In classification, the votes for each category are added together, and the label with the most votes is selected. The voting ensemble technique is a potent instrument that comes in handy when a single model is biased [20-22]. In addition, the voting ensemble may produce a higher overall score than the best base estimator because it combines the predictions of multiple models and attempts to compensate for the flaws of the individual models. Diversifying the base estimators as much as possible is one method for increasing the efficiency of the ensemble. As depicted in Figure 1, the base learners will be distinct pre-trained, fine-tuned models using the same dataset. Voting classifiers typically employ two distinct voting methods:

• Hard Voting: Every classifier votes for a particular class, and the majority prevails. In mathematical terms, the predicted target label of the ensemble corresponds to the mode of the distribution of the independently predicted labels.
• Soft Voting: Each classifier assigns to each data point a probability that it belongs to a particular target class. The predictions are aggregated and weighted according to the importance of each classifier, and the vote is then cast for the label with the highest weighted probability.

Ensemble voting does not ensure that its results will be superior to those of every model in the ensemble. If any single method outperforms the voting ensemble, it is assumed that the outperforming method will be adopted. A voting ensemble is incredibly beneficial for machine learning models that use stochastic learning methods and generate a unique final model each time they are trained on the same dataset. For example, neural networks utilize stochastic gradient descent to identify the optimal solution. When multiple instances of the same machine learning algorithm are combined with slightly different hyperparameters, voting ensembles are also particularly effective.
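A minimal numerical sketch of the two voting rules, assuming three base classifiers over three classes (all probabilities below are hypothetical), also shows how the two rules can disagree when one classifier is far more confident than the others:

```python
import numpy as np

# Hypothetical per-class probabilities from three base classifiers.
probs = np.array([
    [0.9, 0.1, 0.0],   # classifier 1: very confident in class 0
    [0.4, 0.6, 0.0],   # classifier 2: weakly prefers class 1
    [0.4, 0.6, 0.0],   # classifier 3: weakly prefers class 1
])

# Hard voting: each classifier casts one vote for its top class.
votes = probs.argmax(axis=1)                      # [0, 1, 1]
hard = np.bincount(votes, minlength=3).argmax()   # class 1 (2 of 3 votes)

# Soft voting: average the probabilities, then take the top class.
soft = probs.mean(axis=0).argmax()                # class 0 (0.567 vs. 0.433)

print(hard, soft)  # 1 0
```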
Meta-Learning Techniques: Stacked Generalization

A drawback of the voting framework is that each model must contribute equally to the final prediction. This could be a problem if some models perform poorly in some conditions but admirably in others. To address this issue, the literature suggests a voting ensemble extension employing a weighted average or a weighted voting system for the contributing models. This is typically referred to as "blending" [23]. Using a machine learning model to determine how much each model should contribute when making predictions is yet another extension. This is referred to as "stacked generalization", or "stacking" [24] for short. A further generalization of this method involves substituting any learning technique for the linear weighted sum (e.g., a linear regression) model used to integrate the predictions of the sub-models, specifically by training a brand-new model to determine the optimal method for combining the contributions of each sub-model [25]. In contrast to bagging or boosting, which train multiple versions of the same learner, stacking (or stacked generalization) creates a series of models using a variety of learning methodologies (e.g., one neural network, one decision tree, all decision trees).

For example, consider the scenario where m models are trained on a dataset of n samples. We intend to train m binary classifiers h_i and later combine them to classify new instances x by their weighted majority vote. The model outputs are used to calculate the final prediction for any instance x:

ŷ(x) = ∑_{i=1}^{m} w_i h_i(x),

where the Level-1 meta-learner (e.g., a neural network) optimizes the weights w_i of the Level-0 base learners. That is, the individual m predictions associated with each training instance x_i are forwarded as training data to the Level-1 learner, as shown in Figure 2.

Meta-learning, in general, refers to algorithms that acquire knowledge from other learning algorithms. In the field of ensemble learning, this often entails employing machine learning algorithms that learn how to optimally aggregate the predictions of other machine learning algorithms. However, meta-learning may also refer to a researcher's human model selection and algorithm tuning on a machine learning task, which current AutoML [26] algorithms strive to automate.

A simple method for combining multiple models under an ensemble scheme is calculating the mean of all sub-model outputs. Model averaging is an ensemble technique in which multiple sub-models contribute equally to an aggregated prediction. Model averaging can be enhanced by weighting the contribution of each sub-model to the aggregate prediction according to the individual performance of the sub-models. A non-weighted model averaging ensemble aggregates the predictions of various trained models. This strategy has the shortcoming that each model contributes the same proportion to the ensemble prediction, regardless of its performance. This method's variant, the weighted average ensemble, weights each ensemble member's contribution based on the model's confidence or expected performance on a holdout dataset. This enables models with better performance to contribute more, while models with poorer performance contribute less. The weighted average ensemble typically outperforms the plain model averaging ensemble when the weights can be configured accurately.
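To make the two-level scheme concrete, the sketch below trains heterogeneous Level-0 base learners and a Level-1 meta-learner on their cross-validated predictions. scikit-learn's StackingClassifier is used here as a stand-in for the generic description above, and the particular base models are illustrative choices.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_informative=6,
                           n_classes=3, random_state=0)

# Level-0: heterogeneous base learners, as in stacked generalization.
level0 = [("tree", DecisionTreeClassifier(max_depth=4)),
          ("knn", KNeighborsClassifier())]

# Level-1: a meta-learner trained on cross-validated Level-0 outputs,
# i.e., it learns how much weight each base prediction deserves.
stack = StackingClassifier(estimators=level0,
                           final_estimator=LogisticRegression(max_iter=1000),
                           cv=5)
stack.fit(X, y)
print(stack.score(X, y))  # training accuracy of the stacked ensemble
```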
Mixture of Experts

The mixture of experts (MoE) is a traditional ensemble architecture in which each expert is specialized in a specific subset of the input space or expertise domain. In this way, it is hoped to specialize experts on smaller challenges, thereby resolving the original issue via a divide-and-conquer approach. The MoE architecture is composed of N expert networks. These experts are combined via a gating network that divides the input space proportionally; that is, it employs a divide-and-conquer strategy managed by the gating network. Using a specialized cost function, the experts specialize in their respective subspaces. Utilizing the experts' discriminative ability is superior to clustering. The gating network must determine how to distribute examples to the different specialists. Such models have the potential to ease the development of more extensive networks that are inexpensive to compute during testing and more parallelizable during training. Precisely, the strategy consists of four key elements:

• A task is divided into sub-tasks: The first step is to break the predictive modeling problem into smaller chunks. This frequently entails applying domain knowledge to the key problem, determining the task's natural division, and then deriving an effective approach from the sub-solutions.
• Develop an expert for each sub-task: Experts are typically neural network models that predict a numerical value in regression or a class label in classification, because the mixture of experts approach was initially developed and studied in the field of artificial neural networks. Each expert is provided with an identical input pattern and tasked with making a prediction.
• Utilize a gating model to decide which expert to consult: The gating network receives the input pattern that was provided to the expert models and outputs the contribution that each expert should make when generating a prediction for the input. Because the MoE effectively learns which portion of the feature space is learned by each ensemble member, the weights generated by the gating network are assigned dynamically based on the input. Simultaneously training the gating network and the experts in neural network models enables the gating network to determine when to trust each expert's prediction. Traditionally, this training method employed expectation maximization (EM). A "softmax" output from the gating network provides each expert with a confidence score that is similar to a probability.
• Pool predictions and gating model output to make a prediction: Lastly, a prediction is made using a pooling or aggregation strategy that combines the expert models. This could be as straightforward as selecting the expert with the highest output or confidence provided by the gating network. Alternatively, a weighted sum prediction that explicitly incorporates each expert's prediction and the gating network's confidence estimates can be constructed.

The training strategy generally seeks to accomplish two objectives: identifying the optimal gating function for a given set of experts, and instructing the experts on the distribution indicated by the gating function.
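A minimal numerical sketch of the last two elements, assuming the experts and the gating network have already been trained (all numbers below are hypothetical): the gate's softmax output weights each expert's class probabilities, and the pooled prediction is their weighted sum.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical outputs for one input x: 3 experts over 4 classes.
expert_preds = np.array([
    [0.70, 0.10, 0.10, 0.10],   # expert 1
    [0.10, 0.80, 0.05, 0.05],   # expert 2
    [0.25, 0.25, 0.25, 0.25],   # expert 3
])
gate_logits = np.array([0.5, 2.0, 0.1])   # gating network output for x

weights = softmax(gate_logits)    # dynamic, input-dependent expert weights
pooled = weights @ expert_preds   # weighted-sum pooling over experts

print(weights.round(3))   # approx. [0.163 0.728 0.109]
print(pooled.argmax())    # class 1, driven by the trusted expert 2
```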
Related Work

In this subsection, we review several advancements in multiclass classification using machine learning algorithms, most notably CNN approaches, to provide context for the present work. The works presented are divided into four categories: (a) multiclass decomposition schemes, (b) incremental learning, (c) techniques that apply decision rules to the outputs of primary classifiers (opinion aggregation), and (d) mixture of experts.

Multiclass Decomposition Techniques

Decision tree algorithms have been shown to be an effective technique in classification challenges; however, their classification performance is inadequate in multiclass contexts [27]. In [8], decision tree algorithms are integrated with the one-vs.-rest (OvR) binarization technique to increase the scheme's generalization capabilities. In contrast to prior research, which focused on aggregation methodologies, [8] focused on the process of developing base classifiers for the OvR scheme. The proposed method combines distribution information with permutation information derived from the training data for each split point at the root or internal nodes. The Hellinger distance is discussed first; the permutation information is then analyzed, and a new concept of split ratio is presented. The optimal splitting point is determined using the following principle: if a node contains more information about splitting points, it may be easier to make the right decision. In this context, a unique split criterion referred to as the splitting point correction matrix (SPCM) is presented, which can successfully address the imbalance issue created by the OvR scheme.

The central argument of the survey "In Defense of One-vs.-Rest Classification" [7] is that a simple one-vs.-rest methodology is as accurate as any other method, provided that the underlying binary classifiers are well-tuned regularized classifiers, such as support vector machines. Using the best binary classifier available is, according to [7], the most crucial aspect of multiclass classification. Once this has been accomplished, it appears to make little difference which multiclass scheme is implemented; hence, a simple scheme such as OvR is preferred over a more complex error-correcting coding scheme or single-machine approach.

In [9], the authors develop a one-vs.-one (OvO) training process for neural networks that teaches each output unit to differentiate between a specific pair of classes. Compared to the one-vs.-rest scheme, this method produces more output units: the proposed architecture has an output layer with K(K − 1)/2 output units and a shared feature learning component. Each output is trained to distinguish between inputs of two classes and to ignore examples of the other classes. Three steps are devised to realize the OvO classification scheme: (a) creating a code matrix to transform the one-hot encoding to a new label encoding, (b) altering the output layer and loss function, and (c) altering the classification method for new (test) examples. To assess the benefits of the proposed method, it was compared to one-vs.-one and one-vs.-rest classifiers on three distinct plant recognition datasets and a dataset containing photographs of multiple monkey classes. Two deep convolutional neural network (CNN) architectures, Inception-V3 and ResNet-50, were either trained from scratch or initialized with pre-trained weights. It is reported in [9] that one-vs.-one classification outperforms one-vs.-rest classification when all CNNs are trained from scratch. However, fine-tuning the two pre-trained CNNs with the one-vs.-rest method yields the best results, as each CNN was previously trained in this manner.

Continual Learning/Incremental Learning

Vogiatzis et al. [11] introduce a novel image classification model that can efficiently differentiate among recyclable materials. They present the so-called "Dual-branch Multioutput CNN", a customizable convolutional neural network with two branches designed to (i) classify recyclables and (ii) further detect the type of plastic. The proposed architecture consists of two classifiers trained on two distinct datasets to encode complementary properties of recyclable materials. DenseNet121, ResNet50, and VGG16 networks were used on the Trashnet dataset with data augmentation techniques and on the WaDaBa dataset with physical variation approaches.
Specifically, their method jointly utilizes the two datasets, enabling the learning of disjoint label combinations, and experiments have demonstrated its efficacy in waste classification. Additionally, another research study expands on the techniques developed in the work mentioned above and proposes an autonomous, intelligent robotic system for categorizing and separating recyclable materials, aiming to contribute to increasing recycling rates in Greece. Specifically, ref. [28] introduces a two-classifier incremental learning scheme, with the first model trained on RGB waste images and the second trained on near-infrared spectrum waste images. Another similar study [12] found that the D-LinkNet architecture, first introduced for road segmentation, can outperform other implementations for classifying power line structures. In detail, D-LinkNet adopts an encoder-decoder structure with dilated convolutions on a pre-trained encoder.

Under memory-resource limitations, class-incremental learning (CIL) typically encounters the "catastrophic forgetting" issue when updating the joint classification model in response to newly added classes. To address the forgetting problem, numerous CIL approaches transfer the knowledge of old classes by storing representative samples in a memory buffer with limited capacity [13]. To better use the memory buffer, ref. [13] proposes storing more auxiliary low-fidelity exemplar samples, as opposed to the original ones. This memory-efficient exemplar preservation approach makes the transmission of old-class information more efficient. However, the low-fidelity exemplar samples are frequently dispersed in a different domain than the original exemplar samples, a phenomenon known as domain shift. To overcome this limitation, ref. [13] presents a duplet learning approach that aims to build domain-compatible feature extractors and classifiers, thereby significantly reducing the aforementioned domain gap. Consequently, these low-fidelity auxiliary exemplar samples can partially replace the original exemplar samples at a reduced memory cost. In addition, they provide a robust classifier adaptation strategy that refines the biased classifier (trained using samples carrying distillation label knowledge about old classes) using samples with pure, actual class labels.

Opinion Aggregation Techniques on the Final Level

In Hinata and Takahashi's work [29], a composite network technique referred to as EnsNet is developed for classification purposes. EnsNet comprises a primary CNN and several fully connected sub-networks. The set of feature maps created by the last convolutional layer of the base CNN is partitioned along channels into disjoint subsets, with each subset being passed to one of the sub-networks. Each sub-network consists of a fully connected neural network with several weight layers. Although the sub-networks are trained independently, in the final layer all of them participate in an ensemble, and a majority vote determines the final class selection. Experiments have demonstrated that EnsNet achieves the lowest error rate compared to other cutting-edge models.
In another study [14], the authors propose a generic classification architecture of independent parallel CNNs that explicitly exploits a "mutual exclusivity" or "mutually supported decisions" property underlying many dataset domains of interest, namely that in many instances, an image in a given dataset may belong to only one class. The proposed system consists of numerous opinion aggregation decision rules that are activated when the mutual exclusivity property is or is not satisfied, as well as weights that intuitively reflect the confidence with which each CNN identifies its related class. Thus, this approach can (a) take advantage of clearly identified class characteristics when they exist and (b) confidently assign objects to classes even when class boundaries are unclear.

There have been extensive studies comparing different classification algorithms in the binarization strategies domain to determine which method is better. In [10], the researchers are interested in ensemble methods based on binarization techniques; more specifically, the authors focus on the well-known one-vs.-one and one-vs.-rest decomposition strategies, paying particular attention to the final step of the ensembles, the combination of the outputs of the binary classifiers. To combine these outcomes, they conducted an empirical analysis of different aggregations. Several well-known algorithms from the literature, such as support vector machines, decision trees, instance-based learning, and rule-based systems, are utilized in the experimental study. The results demonstrated the usefulness of binarization strategies relative to the baseline classifiers, as well as the dependence of the aggregation strategy on the baseline classifier.

Mixture of Expert Models

In [15], researchers explore numerous types of spectrograms in order to highlight various genre-specific properties for the music genre classification (MGC) challenge. The work presents a mixture of experts (MoE) system to exploit all of these characteristics jointly. A set of MGC models can be derived using different spectrogram-based statistics, and each model is then treated as an expert. Consequently, a neural mixture model is introduced to collect and synthesize the expert models' predictions and then generate a final prediction for a specific piece of music. The proposed neural-based MoE system can dynamically decide the weighting factor for each expert to improve the performance of the MGC task.

In a similar domain, with minimal data available for the challenge of fine-grained recognition, it is impossible to develop diverse expertise by dividing the data. In [16], a gradually enhanced expert learning technique is combined with a Kullback-Leibler divergence-based constraint to increase the diversity among experts and address this challenge. The technique sequentially learns and adds new experts to the model based on the knowledge of previous experts, while the added constraint compels the experts to produce different prediction distributions. This forces the experts to approach the task from various angles, enabling them to develop expertise in a variety of subspace problems. Experiments indicate that the resulting model increases classification performance and reaches state-of-the-art performance on many fine-grained benchmark datasets.
Proposed Meta-Learning Techniques

The primary objective of this study is to test and evaluate several neural-network-based meta-learning strategies for solving the multiclass problem by combining ensemble learning, mixture of experts, one-vs.-rest decomposition, the Bayes rule, and opinion aggregation approaches. In doing so, we try to address the drawbacks of traditional multiclass methods. First, with the Bayes-based approach, we address the need to train either on different datasets of similar object types but with diverse specifications or acquisition protocols, or on the same hierarchical dataset but with different labels. Furthermore, our methods can substantially enhance a database with imbalanced classes using another dataset covering specific classes. In addition, we try to enhance the prediction power for a class that is weak in multiclass classification by using multiclass decomposition methods. In the following, we address techniques that are either part of the overall family of meta-learning techniques or a combination of several techniques. In particular, we focus on the image classification problem; thus, the "Base Learners" appearing in Figure 2 will represent "Classifiers" in the following proposed methods.

Ensemble Approach Based on Bayes' Theorem

The first concept implemented is based on Bayes' theorem. Specifically, this approach is an ensemble learning technique that enhances the prediction accuracy of an image classification task by utilizing the additional information gained by individual sub-class classifiers. The main idea is that useful information could be hidden in the relations formed between the multiclass and the sub-class classifiers. In detail, this technique can be applied when there is a classification problem with n different classes C_i ∈ {C_1, . . . , C_n} and n individual expert sub-class classifiers that can further divide each class C_i into new sub-classes Sc_i ∈ {Sc_1, . . . , Sc_m}, where m represents the number of sub-classes of class C_i. Each sub-class classifier is implemented as a multiclass classifier which further divides class C_i into m new sub-classes. The final classification probability P_Bayes(C_i) is calculated for each initial class C_i based on the Bayes theorem. As we explain in detail below, P_Bayes(C_i) is calculated by combining the posterior probability obtained as an independent observation from the expert sub-class classifiers with the output of the baseline multiclass classifier.

First, we process the information gained by each expert sub-class classifier. In detail, each sub-class classifier Sc_i (an expert for class C_i) returns a list of numbers in [0,1] corresponding to the probabilities that an image belongs to each of Sc_i's sub-classes. Let P_Sc_i(j|C_i) denote one such probability (namely, the probability assigned by the classifier that the image belongs to its sub-class j). We pick the highest such probability and term its corresponding sub-class M_i:

M_i = argmax_j P_Sc_i(j | C_i).    (2)

Intuitively, M_i is the sub-class of C_i "favored" by the sub-class classifier Sc_i as the most probable one for the input image. Notice that M_i is an observation corresponding to the likelihood L(C_i | M_i): the higher the probability P(M_i | C_i), the higher the probability that the image belongs to class C_i. We are now ready to use Equation (4) below to calculate what we term the "Bayes probability":

P_Bayes(C_i) ∝ P(M_i | C_i) · P_multi(C_i).    (4)
Equation (4) has a similar form to the Bayes rule (posterior ∝ likelihood · prior). There, the output P_multi(C_i) of the baseline multiclass classifier, denoting the probability assigned by that classifier to the image belonging to class C_i, plays the role of the prior, while the probability P(M_i | C_i) returned for M_i by the Sc_i sub-class classifier (as explained above) plays the role of the likelihood. P_Bayes(C_i) is the posterior, corresponding to the probability calculated by the middle level of our Bayes-theorem-based ensemble that the image belongs to class C_i. After calculating the class probabilities P_Bayes(C_i) for each class C_i, these are used as inputs to a generalizer, which outputs a final prediction P(C_i) denoting the probability that an image belongs to each class C_i. Essentially, this generalizer acts as a decision maker that is trained to classify images into the n classes of the initial classification task given only the P_Bayes probabilities, as shown in Figure 3.

The novelty of this approach lies in two parts: first, in using multiple unrelated multicenter datasets to train the expert sub-class classifiers, and second, in how the final generalizer-level inputs are calculated using the baseline multiclass classifier and the Bayes theorem. For the training of the generalizer, every image of the initial training dataset is fed to every classifier (sub-class classifiers and multiclass) to produce the P_Bayes(C_i) values for every class C_i. Finally, we calculate the training loss by comparing the final prediction argmax_i P(C_i), where i ∈ {1, . . . , n}, with the ground-truth class C.

However, this approach has one major drawback that limits its use in many image classification applications. Specifically, only a fraction of image classification problems have sub-class datasets available, which means that the individual datasets must be created from scratch or explored in detail to derive sub-classes beforehand; otherwise, this approach is not applicable. On the other hand, a case where our Bayes-rule-based approach can be applied efficiently is hierarchical (multi-label) datasets, where the additional labels can be used to define the sub-classes.
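A minimal sketch of this middle (Bayes) level is given below, assuming the classifier outputs are already available as probability arrays; all values shown are hypothetical. The resulting P_Bayes vector is what the generalizer would receive as input.

```python
import numpy as np

def bayes_level(p_multi, subclass_probas):
    """Middle level of the Bayes-theorem-based ensemble (Equations (2), (4)).
    p_multi:         [n_classes] output of the baseline multiclass classifier
    subclass_probas: list of n_classes arrays; entry i holds the sub-class
                     probabilities returned by expert Sc_i for one image.
    Returns the (unnormalized) P_Bayes values fed to the generalizer."""
    likelihood = np.array([p.max() for p in subclass_probas])  # P(M_i|C_i), Eq. (2)
    return likelihood * np.asarray(p_multi)  # posterior ~ likelihood * prior, Eq. (4)

# Hypothetical outputs for a 3-class problem with 2-4 sub-classes per class.
p_multi = np.array([0.6, 0.3, 0.1])
subclass_probas = [np.array([0.7, 0.3]),
                   np.array([0.2, 0.5, 0.3]),
                   np.array([0.25, 0.25, 0.25, 0.25])]
print(bayes_level(p_multi, subclass_probas))   # inputs to the generalizer
```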
Mixture of Experts: Case of One-vs.-Rest Classifiers

The mixture of experts (MoE) is a popular and challenging combination strategy for improving machine learning performance. It is based on the divide-and-conquer approach, which divides the problem space among a few neural network experts regulated by a gating network, and there exist different ways of partitioning the problem space between the experts. In brief, it entails breaking down the predictive modeling task into sub-tasks, training an expert model on each, constructing a gating model that learns which expert to rely on based on the input, and aggregating the predictions. Although the technique was first published with neural network experts and gating models in mind, it can be applied to any form of model. As a result, it is closely related to stacked generalization and belongs to the meta-learning class of ensemble learning approaches [30]. In detail, this technique can be applied in any multiclass classification problem with n different classes C_i ∈ {C_1, . . . , C_n}. Its difference from the previous approach lies in how the final probability for the classes is calculated: instead of relying solely on a neural network design in a conventional manner, we also employ a one-vs.-rest strategy.

This requires partitioning the multiclass dataset into n binary classification problems, one for each distinct class. A binary classifier is then trained on each binary classification problem, and predictions are generated by combining the single binary class classifiers with the conventional multiclass method. The novelty of this proposed approach lies in how we calculate the final individual weight of each expert based on the output of the multiclass classifier. The final classification probability for each initial class C_i is calculated by multiplying (a) the probability produced by the single binary class classifier for the corresponding binary problem and (b) the output of a baseline model (multi-classifier) acting as a gating function, trained to solve the initial problem.

In this mixture of experts approach, we use the initial data to create n new training datasets, each containing all the images of the initial dataset, labeled according to whether they belong to class C_i. Specifically, each new dataset has two classes, the first corresponding to the case where an image belongs to class i and the second to the case where it does not. This is in contrast with the previous Bayes-theorem-based approach, where the training datasets could differ from the initial training dataset. Specifically, we obtain from the baseline multiclass classifier the class probability P_multi(C_i) and from each single binary classifier the probability P_yes_i(C_i), as shown in Figure 4. Each expert is a single binary classifier with P_yes_i(C_i) + P_no_i(C_i) = 1, where P_yes_i is the probability that an image belongs to class C_i and P_no_i is the probability that it does not, according to the single binary expert classifier i. The class probabilities are calculated as

P(C_i) = P_multi(C_i) · P_yes_i(C_i),    (5)

and the final classification result is found by searching for the probability with the highest value,

index = argmax_{i ∈ {1, . . . , n}} P(C_i).    (6)

Then, using this index, we find the predicted class C_index.
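A minimal sketch of Equations (5) and (6) follows, with hypothetical probabilities standing in for the classifier outputs.

```python
import numpy as np

def ovr_moe_predict(p_multi, p_yes):
    """One-vs.-rest mixture of experts (Equations (5) and (6)).
    p_multi: [n_samples, n_classes] gating output of the multiclass model
    p_yes:   [n_samples, n_classes]; column i is the 'belongs to C_i'
             probability from the i-th binary expert."""
    scores = p_multi * p_yes          # Eq. (5): gate each expert's vote
    return scores.argmax(axis=1)      # Eq. (6): most probable class index

# Hypothetical probabilities for 2 samples and 3 classes.
p_multi = np.array([[0.5, 0.3, 0.2],
                    [0.2, 0.2, 0.6]])
p_yes = np.array([[0.4, 0.9, 0.3],
                  [0.7, 0.1, 0.8]])
print(ovr_moe_predict(p_multi, p_yes))   # -> [1 2]
```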
Proposed Combination Strategies

In this section, we put forward two stacked generalization variants that follow the one-vs.-rest architectural paradigm. Stacked generalization is a method for lowering the generalization error rate of one or more generalizers. It operates by inferring the biases of the generalizer(s) with respect to a given learning set: this inference proceeds by generalizing in a second level whose inputs are, for instance, the opinions of the base learners when trained on a portion of the learning set and asked to predict the remainder, and whose output is, for instance, the correct guess. When employed with numerous generalizers, stacked generalization can be viewed as a more sophisticated variant of cross-validation, as it employs a more elaborate mechanism for merging the various generalizers than cross-validation's simple winner-takes-all approach. In stacking, an algorithm takes the outputs of the sub-models as input and learns the optimal way to combine these input predictions into a more accurate output prediction. It is useful to view the stacking process as having two levels, as depicted in Figure 2:

1. Level 0: The training data are the inputs to the level 0 base learners, which learn to make predictions for either the initial task or a specified sub-problem, depending on the training data.
2. Level 1: The predictions produced by the level 0 learners are used as input to train the single level 1 meta-learner, which learns from these data to generate the final predictions.

Using a neural network as a meta-learner is usually advantageous when there are multiple base learners. Specifically, the sub-networks can be embedded within a larger multi-headed neural network, which then learns how to optimally combine the predictions from each input base learner. This allows the stacking ensemble to be treated as a single large model. It has the advantage that the outputs of the base learners are provided directly to the meta-learner; moreover, if desired, the weights of the base learners can be updated jointly with the meta-learner model.

As commonly used in the literature, "one-vs.-rest" refers to a generic classification paradigm that employs binary methods for multiclass classification. It involves approaching the multiclass problem through the prism of multiple "binary" classifications: a dedicated "binary" classifier (which may use a sigmoid function for classification into the most probable of the two classes) is trained on the original dataset and solves a "binary" classification problem, and the final predictions are typically made using the most confident classifier or a simple, usually absolute, majority rule. One advantage of this approach is that machine learning algorithms established for binary classification can be extended to handle multiclass classification problems. The downside is that the classifiers become imbalanced when the training data contain a disproportionate number of negative examples compared to positive examples. In our approaches, we exploit the benefits of such binary classifiers at the base level while enhancing their prediction power using "knowledge" at the meta-learner level. Specifically, we combine properties of stacked generalization schemes to provide novel improvements on the multiclass problem while addressing the diversity of classes and datasets. More specifically, in stacked ensemble learning, the meta-learner can and should convey knowledge about the prediction power of the base classifiers or the peculiarities of the different training datasets underlying the training of each base classifier. This is precisely the aim of our proposed schemes, which are introduced and discussed in the following.

Stack Generalization (IP-Networks Only)

The first one-vs.-rest method we propose uses only the trained experts to solve the initial multiclass problem. Specifically, the original problem is decomposed into n separate binary problems following the one-vs.-rest scheme, implemented with n separate binary classifiers, also referred to in this work as "experts" or "Independent Parallel" (IP) classifiers, as shown in Figure 5. In this stacked generalization variant, we create n new binary training datasets to train the single binary classifiers. Each expert is fed the entire image dataset and outputs two probabilities, one for the positive and one for the negative outcome. Then, the outputs of the individual experts are used as inputs to train a generalizer that learns to solve the initial multiclass problem.
The final prediction produced by the generalizer is a list of probabilities, one per class, summing to one (softmax layer). This method's performance depends on the prediction accuracy of the individual experts. However, as will be shown later, the stacked generalization methods manage to improve the base multiclass performance in every case, regardless of the accuracy of the experts.

Stack Generalization (IP-Networks + Multiclass CNN)

Like the previous method, the initial problem is divided into n smaller problems using binary classifiers, where n is the number of original classes. The only technical difference with the previous method is that the generalizer uses extra information from the output of a baseline multiclass classifier, as shown in Figure 6. This scheme differs from the mixture of experts in Figure 4 in two ways: first, in the manner in which the output of the multiclass classifier is employed, and second, in the presence of a meta-learner (generalizer). In Figure 4, the output of the multiclass classifier serves as a gating function to regulate the participation of the single binary expert classifiers, whereas here it is used as additional input information to the meta-learner. The meta-learner in Figure 6 is an integral component of this method: it produces the final class by combining the outputs of the independent classifiers with those of the multiclass classifier. In general, the addition of extra features to a classification task is usually advantageous and can significantly help capture aspects of the problem that were previously not completely observed. For instance, the IP-networks-only method cannot fully comprehend the broader picture, more precisely, the relations between classes other than the one each expert is trained on. By contrast, a baseline multiclass model is trained precisely to understand these inter-class relationships. Hence, adding the baseline multiclass model alongside the IP experts and the meta-learner (generalizer) is expected to improve the classification accuracy on the initial task significantly.
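A minimal sketch of the level-1 input construction for this IP + multiclass variant follows. The logistic regression is a stand-in generalizer used only for illustration; the actual generalizer is the small neural network described in the configuration section below. All array values are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def build_meta_features(expert_probas, multi_probas):
    """Level-1 inputs for the IP + multiclass stacking variant: the binary
    experts' positive-class probabilities concatenated with the baseline
    multiclass classifier's probability vector."""
    return np.hstack([expert_probas, multi_probas])

# Hypothetical level-0 outputs for 6 training images and n = 3 classes.
rng = np.random.default_rng(2)
expert_probas = rng.random((6, 3))         # one column per binary expert
multi_probas = rng.dirichlet(np.ones(3), size=6)
y = np.array([0, 1, 2, 0, 1, 2])           # ground-truth classes

meta_X = build_meta_features(expert_probas, multi_probas)
generalizer = LogisticRegression(max_iter=1000).fit(meta_X, y)
print(generalizer.predict(meta_X))
```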
Experimental Evaluation

This section demonstrates the experimental process we used to evaluate the performance of the proposed methods. Initially, we discuss the experimental setup in three parts: the datasets we experimented with, the configuration, and the evaluation metrics. Right after, we present a detailed analysis comparing the results of each proposed method along with a baseline approach.

Experimental Setup

First, the design of the experimental framework and the configurations used are presented. We start by presenting the specific datasets used to test the newly proposed methods, then we discuss the design of every test case, and finally we explain the reasons behind selecting the evaluation metrics. The main goal of this experimental framework is to provide precise and comprehensive results regarding the classification performance of the aforementioned methods.

Datasets

Three datasets were used for the performance evaluation of this study. In every case, a stratified random split into a 70% training set and a 30% validation set was applied, as shown in Table 1. The datasets were chosen to demonstrate the adaptability of the various techniques to a wide range of situations through a variety of use cases. First, we evaluate two distinct subsets of the herbarium dataset with respect to the number of samples in each class and the balance of the datasets; this scalable increase in the number of classes in a hierarchically balanced dataset demonstrates the general case in the multiclass classification domain. In the case of UTKFACE, we deal with a highly imbalanced dataset with real-world applications such as facial recognition. Finally, we test the case of building a new enhanced dataset by combining photos from two unrelated datasets into a common image database, to evaluate the proposed methods' performance under such conditions.

The first dataset used in this work was Herbarium 2022 [31], a hierarchical dataset that classifies around 80,000 images of herbs into a three-label (taxonomy) system. The first distinction level is the family, the next level is the genus, and the last is the species. In detail, the herbarium contains around 250 families, 2500 genera, and 15,501 unique herbarium specimens. The problem originally linked to this dataset is to test classification performance on a test dataset (with different images of plants similar to the training set) based on the unique ID of each plant species. As can be understood, this problem's size and complexity make it especially hard to solve, even more so given the existing class imbalance (each unique species can have only 5-80 training images). However, not many hierarchical datasets are available, so for this research two batches of herbarium families were distinguished to form the first two experimental datasets: the first was formed using four families, and the second using eight families, together with their related genera (sub-classes).

The second dataset used in this work was UTKFACE [32], a large-scale face dataset spanning a wide age range, from 0 to 116 years old. The dataset contains approximately 20,000 face images with age, gender, and ethnicity annotations, exhibiting a wide range of poses, facial expressions, illumination, occlusion, and resolution. It may be utilized for various tasks, including face identification, age estimation, age progression/regression, and landmark localization; here it is used for image classification. For this study, the race attribute, divided into five categories, (a) White, (b) Black, (c) Asian, (d) Indian, and (e) Others (such as Hispanic, Latino, Middle Eastern), was treated as the level 0 classes, and for each class, the corresponding "binary" gender categorization was used as level 1.

The third dataset in this experimental setup tackles the important "recyclable waste classification" problem. It was created from images extracted from a selected subset of TrashBox [33] combined with images from a subset of the Trashnet [34] dataset. TrashBox is a hierarchical dataset with two labels, the first indicating the material (e.g., plastic, paper, metal) and the second the type of item (e.g., newspaper, beverage can, . . . ). In contrast, Trashnet is a simple multiclass dataset with six distinct classes containing images of items of different materials (e.g., plastic, paper, metal, cardboard, . . . ).
In this new dataset, there are three classes at level 0 (plastic, paper, metal), with the following sub-classes for each at level 1:

• Plastic: divided into plastic bags, plastic bottles, plastic containers, plastic cups, and cigarette butts.
• Paper: with sub-classes newspaper, paper cups, simple paper, tetrapak, and cardboard.
• Metal: with sub-classes beverage cans, construction scrap, metal containers, other metal, and spray cans.

The final blended dataset has around 10,000 unique images with two levels of annotation.

Configuration and Specifications

As is well known in the machine learning community, it is not easy to determine whether a specific classification approach performs well by looking at the metrics of that implementation alone. The simplest solution is to create a baseline approach and compare it to the new implementations while ensuring a consistent set of rules is applied. In this study, the ResNet50 [35] model was used to train all the classifiers of all the aforementioned methods, with the same training parameters (learning rate, batch size), to ensure a fair comparison between the methods. The learning rate is one of the factors determining the convergence speed, which directly affects training performance; it was set to 0.01 with the Adam optimizer [36]. Additionally, a baseline ResNet50 multiclass classifier was trained for every dataset, using the same training parameters as the rest of the proposed methods, to provide a solid starting point for measuring the performance improvement of every approach. Thus, ResNet50 networks were used as base classifiers, while the generalizers had a different, more straightforward form. Furthermore, all generalizers had the same architecture (Figure 7) and were designed to be as simple and lightweight as possible. Specifically, each generalizer comprised two fully connected layers of 256 nodes with a ReLU activation each and a batch normalization layer in between, followed by a dropout layer, and finally one more fully connected layer with a softmax activation and a number of nodes equal to the number of classes.
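Under the assumptions that the unspecified dropout rate is 0.5 and that the generalizer is trained with the same Adam settings as the base classifiers, a minimal Keras sketch of the Figure 7 generalizer could look as follows.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_generalizer(n_meta_features, n_classes, dropout_rate=0.5):
    """Generalizer following the Figure 7 description: two 256-node ReLU
    layers with batch normalization in between, a dropout layer, and a
    softmax output sized to the number of classes. The dropout rate is an
    assumption; the paper does not state it."""
    return keras.Sequential([
        layers.Dense(256, activation='relu', input_shape=(n_meta_features,)),
        layers.BatchNormalization(),
        layers.Dense(256, activation='relu'),
        layers.Dropout(dropout_rate),
        layers.Dense(n_classes, activation='softmax'),
    ])

# Hypothetical sizes: 5 classes, meta-features = expert + multiclass outputs.
model = build_generalizer(n_meta_features=2 * 5, n_classes=5)
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.01),
              loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.summary()
```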
Evaluation Metrics

The selection of appropriate evaluation metrics can be challenging. In this study, four of the most commonly used metrics were selected, and the macro average [37] of these four performance metrics is compared to determine how effective the proposed methods are. The most common evaluation metric is the overall accuracy of a multiclass classifier, calculated as the number of correct predictions divided by the total number of predictions:

Accuracy = (number of correct predictions) / (total number of predictions).    (7)

However, the information provided by the average accuracy can be misleading, especially in problems with many classes; hence, other metrics should also be considered to determine whether there are significant improvements in the overall performance of a classifier. For this reason, we also measured the effectiveness of our proposed methods in terms of precision, recall, and F1 score. In a multiclass classification problem with n different classes C_i ∈ {C_1, . . . , C_n}, we define tp_i as the number of true positive predictions for C_i, i.e., the number of times the multiclass classifier correctly predicted the positive class C_i. Additionally, fp_i is the number of false positives for C_i, defined as the outcomes where the model incorrectly predicts class C_i; by contrast, fn_i is the number of false negative predictions for C_i. Equation (8) shows the precision of a classifier, which represents the proportion of positive predictions that were actually correct. Since the problem is multiclass, the average of the per-class precisions tp_i / (tp_i + fp_i) is considered:

Precision = (1/n) Σ_{i=1}^{n} tp_i / (tp_i + fp_i).    (8)

Respectively, the recall metric, displayed in Equation (9), shows the proportion of actual positives that were identified correctly. Here again, to calculate the average recall, the individual recall of every class C_i is assessed:

Recall = (1/n) Σ_{i=1}^{n} tp_i / (tp_i + fn_i).    (9)

Another helpful metric is the F1 score (Equation (10)), because it combines a multiclass classifier's precision and recall:

F1 = 2 · Precision · Recall / (Precision + Recall).    (10)

By using these metrics, we can better determine whether one of the developed methods improves the overall performance or just a specific portion of the problem, while not overlooking the main classification task.
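The following sketch computes Equations (7)-(10) from hypothetical label arrays; note that Equation (10) is evaluated here on the macro-averaged precision and recall.

```python
import numpy as np

def macro_metrics(y_true, y_pred, n_classes):
    """Macro-averaged accuracy, precision, recall and F1 score
    (Equations (7)-(10)) computed from per-class counts."""
    tp = np.zeros(n_classes); fp = np.zeros(n_classes); fn = np.zeros(n_classes)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    accuracy = tp.sum() / len(y_true)                    # Eq. (7)
    # np.maximum guards against division by zero for empty classes.
    precision = np.mean(tp / np.maximum(tp + fp, 1))     # Eq. (8)
    recall = np.mean(tp / np.maximum(tp + fn, 1))        # Eq. (9)
    f1 = 2 * precision * recall / (precision + recall)   # Eq. (10)
    return accuracy, precision, recall, f1

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
print(macro_metrics(y_true, y_pred, n_classes=3))
```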
Results and Analysis

In this section, a detailed analysis of the experimental results of the proposed methods is presented. The methods were applied and tested using four different paradigms of varying classification difficulty. First, the specifications of every architecture are discussed according to Table 2. The first column displays the total trainable parameter count of the proposed approaches in millions, while the second column reports the floating point operations (FLOPs) required to complete an inference step, in billions. The parameter n denotes the number of classes of the original multiclass problem; the parameter count and the FLOPs are proportional to n because, in our decomposition-based approaches, every new class requires an individual binary or sub-class classifier. The third column specifies the time, in milliseconds (ms), needed for a single inference step. To obtain robust measurements of the time per inference step, we ran 30 repetitions over the same 200 images for all architectures on a PC equipped with an Intel i7-9700 CPU, 32 GB of RAM, and an NVIDIA RTX 2080 SUPER 8 GB GPU. The last column presents the depth of every neural network, counting only the layers with trainable parameters.

As expected, the baseline multiclass architecture, in our case ResNet50, was the most lightweight in terms of parameter count and time per inference step. In particular, the reported times per inference step were obtained by running the different methods on the five-class UTKFACE dataset. Interestingly, the baseline multiclass classifier was about six times faster than all the other methods except the IP-only method, which was close to five times faster. By contrast, the depth of the architectures remained similar, since the methods we propose make the overall architecture grow wider rather than deeper; this can also be a practical advantage because many operations can be performed in parallel, reducing the inference time when run on a high-end machine. Furthermore, the number of parameters and the floating point operations of the proposed methods increase linearly with the number of classes n of the initial multiclass classification problem. Thus, a multiclass classification problem with 1000 classes, such as ImageNet [38], would require considerable storage space; however, the accuracy improvement that can be accomplished on ImageNet using our proposed methods has not yet been tested. Even though the inference time increases, multiclass problems using the proposed architectures can still be solved in real time. Therefore, this slight increase in inference time is only a minor trade-off (for multiclass problems with a small number of classes) compared to the classification accuracy improvement that can be achieved, as demonstrated in the following tables.

First, the results from the four-class herbarium dataset are shown. As depicted in Table 3, the classification task is not very complicated; hence, the overall baseline performance is quite high, at 95.32% accuracy, so the potential improvement gap cannot be very large. Nevertheless, every proposed method managed to improve every monitored metric compared to the baseline. In detail, the Bayes-rule-based method achieved a +0.14% accuracy improvement, the mixture of experts showed a +0.82% improvement, the modified stacked generalization (IP-only) improved by +0.94%, and the modified stacked generalization (IP with multiclass) increased the classification accuracy by +1.09%.

The proposed implementations were tested on datasets with varying improvement gaps to produce solid results. The four-class herbarium dataset resembles a more straightforward scenario where the classes and the subcategories of each class are well balanced. In contrast, the eight-class herbarium resembles a more complex scenario: it contains images from four additional classes, eight in total, and the subcategories are slightly unbalanced. Furthermore, the herbarium data have class peculiarities (especially in the eight-class formulation) that render the multiclass distinction and classification quite challenging. Here, our proposed methodologies show their advantages in prediction power. Specifically, as shown in Table 4, the baseline accuracy was 80.59%, with the Bayes method achieving a +1.09% improvement, the mixture of experts method +3.12%, the stacked ensemble (IP-only) method +3.37%, and the IP with multiclass method the most remarkable improvement of +5.89%. In this case, too, all the proposed methods achieved better overall results than the baseline, showing that the proposed methods can be even more helpful in more complex classification tasks. The first two experiments were based on well-designed datasets that exemplify the properties of the different classes; for this reason, the performance of all classifiers was high and quite comparable.

UTKFACE was the third dataset tested, and it represents an even more challenging classification scenario, where the optimal solution is not clear. In detail (Table 5), the baseline implementation achieved only 72.77% accuracy, which means that the improvement gap could be quite large. In this case, the Bayes method managed to increase the classification accuracy by +2.09%, the IP-only method by +3.28%, and the IP with multiclass method by +4.64%. Finally, the mixture of experts achieved the most significant improvement of +5.87%. This experimental scenario shows that the most "stacked" model is not always the best, and that a more elegant approach, such as a mixture of experts without any meta-learners, can be better in specific use cases. The last dataset tested represents an easier classification task, with the baseline achieving 94.14% accuracy, as depicted in Table 6.
On this dataset, the Bayes-theorem-inspired method achieved only a +0.2% accuracy improvement, the MoE method achieved a +0.94% improvement, and the stacked generalization (IP-only) a +0.16% increase. In contrast, the stacked generalization (IP with multiclass) improved significantly, by +1.02%.

Overall, in almost all four test cases, the Bayes method achieved the smallest performance improvement, varying from +0.14% in the least favorable case up to +2.09% in the UTKFACE test case, where the improvement gap was the greatest. The mixture of experts approach was surprisingly effective in the classification task of the UTKFACE dataset, achieving the most remarkable increase across all monitored metrics; however, this method performed very well in the other test cases too. The IP-only method outperformed the Bayes-theorem-inspired method in three out of four test cases and the MoE in only two out of four test cases. While this method never outperformed the most complex stacked generalization method, it proved to be a stable middle ground between the two best-performing approaches. Finally, the stacked generalization (IP with multiclass) method had the best metrics in every case except the UTKFACE dataset, where it was the second-best approach. Overall, this method had the most stable performance, indicating that in most cases, the more complex the stacked model, the more remarkable the improvement it can achieve.

Conclusions and Future Work

In conclusion, we proposed four new classification approaches in this study and evaluated them on four distinct test cases of varying difficulty. Specifically, we presented a series of novel approaches for combining the outputs of a decomposed multiclass (image) classification task under the umbrella of the one-vs.-rest framework for the opinion aggregation module. The first was a novel opinion aggregation mechanism that combines information derived from sub-class classifiers based on Bayes' theorem. The second was a novel design for the mixture of experts approach that incorporates the knowledge of a multiclass classifier as a gating model. Lastly, we put forward two stacked generalization variants with novel characteristics that follow the one-vs.-rest architectural paradigm. In the end, all methods improved all monitored evaluation metrics when compared to the baseline multiclass classifier. Our goal in this study was not only to design an improved classifier but also to explore methodologies that can exploit small class differences in order to derive specific sub-classes hidden in the data. The performance of the newly proposed mixture of experts method, which does not employ any additional meta-learners (generalizers), was particularly remarkable. In addition, the proposed Bayes-theorem-based approach demonstrated that information gained from a second label, as in the herbarium and UTKFACE cases, or from an entirely new dataset, as in the waste classification problem, can provide researchers with new classification "features" to utilize. In a future study, the impact of adding these new features to a classification task could be evaluated and quantified, providing more insight into the limitations of this approach.
Furthermore, even though the strong performance of our stacked generalization methods with generalizers was anticipated, it confirms that traditional meta-learning remains a powerful tool to be considered in multiclass classification tasks.

Conflicts of Interest: The authors declare no conflict of interest.
2022-12-22T16:14:29.105Z
2022-12-20T00:00:00.000
{ "year": 2022, "sha1": "cb187989eac6b40bb5ab71d1ac4ec3b780d1f0a1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/23/1/9/pdf?version=1672202200", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2ea8b08c6aeb84a27a4ca3ea005b7eae19a9f22e", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
109924387
pes2o/s2orc
v3-fos-license
Bronchial Carcinoid Tumors with Massive Osseous Metaplasia: A Case Report and Review of the Literature

Bronchial carcinoid tumors are primary lung neoplasms thought to originate from neuroendocrine cells, i.e., Kulchitsky cells, in the bronchial mucosa, although the type of cellular origin has not been clearly understood. A 61-year-old male patient underwent surgery, and microscopic examination of the specimen revealed an anastomosing trabecular bony structure among nests of tumor cells with round nuclei, granular chromatin, and large eosinophilic cytoplasm. Our case has been deemed worthy of presentation as a bronchial carcinoid tumor with exaggerated osseous metaplasia.

INTRODUCTION

Bronchial carcinoid tumors are primary lung neoplasms thought to originate from neuroendocrine cells, i.e., Kulchitsky cells, in the bronchial mucosa, although the type of cell from which they originate has not been clearly understood. They account for more than 25% of all carcinoid tumors throughout the body and for less than 1% of all lung neoplasms (1,2).

According to the World Health Organization (WHO) classification of 2015, bronchial carcinoid tumors are classified into typical and atypical tumors based on their histopathological characteristics. Another classification, according to location, is into central and peripheral tumors. Furthermore, there are reports that classify such tumors as well-differentiated and moderately-differentiated neuroendocrine carcinoma according to their histopathological characteristics and behavioral potential, under the heading of neuroendocrine lung tumors (1,3).

Bronchial carcinoid tumors may present as various morphologic variants. These variants include tumors with metaplastic cartilage and bone growth, tumors containing mucinous stroma, tumors with wide vascular structures, tumors presenting cystic changes, tumors presenting a glandular pattern, and tumors containing amyloid-like/sclerotic stroma. Although a carcinoid tumor with metaplastic bone growth is identified as a variant, exaggerated osseous metaplasia is a rare finding in carcinoid tumors (4,5).

CASE REPORT

A 61-year-old male patient presented to an external medical center with the complaint of persistent cough in February 2017 and was referred to our hospital's clinic of pulmonary diseases for further examination and treatment after a pulmonary mass was detected during a thoracic computed tomography (CT) examination performed for etiology. The patient had no active complaints at admission. He had been an active smoker for 35 years, and there was no specific finding in his past or family history. The patient initially underwent a chest roentgenogram, followed by a thoracic CT examination, which revealed a mass lesion measuring 4x3 cm and containing a significant calcific component in the right hilar region (Figure 1). Fiberoptic bronchoscopy (FOB) was planned for histopathological verification. FOB revealed a complete obstruction of the right middle lobe due to mucosal irregularity, and a punch biopsy was performed. A right lower lobectomy was planned since the biopsy specimen was identified as a neuroendocrine tumor. Macroscopical examination of the right lower lobectomy specimen revealed a solid, well-circumscribed tumoral lesion in the bronchial lumen and parenchyma, measuring 4x3x3 cm, with gray to white cross-sectional areas and containing locally hard-to-cut regions (Figure 2A).
Histopathological examination of the lesion following decalcification and formalin fixation revealed an anastomosing trabecular bony structure among nests of tumor cells with round nuclei, granular chromatin, and large eosinophilic cytoplasm (Figure 2B-D). No significant cytological atypia, necrosis, or elevated mitotic activity (<2/10 HPF) was observed in the tumor cells, which had a normal nucleus-to-cytoplasm ratio. The cells showed strong positivity on immunohistochemical staining for NSE, CD56, synaptophysin, and chromogranin (Figure 3A-D). Based on the histopathological and immunohistochemical examinations, the case was diagnosed as a carcinoid tumor with exaggerated osseous metaplasia. The patient has been under clinical follow-up for 17 months and is currently in complete remission following surgery.

DISCUSSION

Carcinoid tumors are considered neuroendocrine tumors and were first identified by Langhans in 1867. Gosset and Masson reported in 1914 that these tumors, studied and characterized by various researchers over time, have endocrine properties (6).

The incidence of bronchial carcinoid tumors ranges from 0.1 to 1.5 per 100,000. Such cases are detected at a younger age than primary lung malignancies (typically under 60 years) and may be seen across a wide range of ages. Bronchial carcinoid tumors have been reported to be more common in females than in males; however, some recent publications report an equal rate of occurrence in females and males, and some even state that they are 3.6 times more common in males (1,2,7).

Risk factors for developing bronchial carcinoid tumors include a family history of carcinoid tumors and various genetic mutations. Although smoking is not known to play a role in the pathogenesis, a history of smoking is usually present in patients diagnosed with atypical carcinoid tumors (8,9). In our case, the patient had a history of smoking for 35 years, and there were no carcinoid tumors in his family history. Patients with carcinoid tumors usually present with complaints of dyspnea and hemoptysis, since these are endobronchial growing masses. One third of patients are detected incidentally, without presenting any symptoms, depending on the involvement of small airways. Paraneoplastic syndromes such as Cushing syndrome and acromegaly can also occur (10). Our patient presented with the complaint of a long-lasting, persistent cough.

On radiological examination, such tumors are seen as hilar or perihilar masses; however, they may rarely be detected as peripheral masses as well. While FOB is the preferred modality for centrally located tumors, CT is preferred for peripheral tumors. Bronchial carcinoid tumors have a distinctive FOB appearance as a result of their macroscopical features, including polypoid form, smooth surface, red to bronze color, and endobronchial growth (11). In our case, the diagnosis was made by a biopsy taken during bronchoscopy performed following the detection of a hilar mass on CT.
The WHO classifies carcinoid tumors into two categories according to their histopathological characteristics: typical and atypical carcinoid tumors. The main criteria used in this distinction are mitotic activity and necrosis. In typical carcinoid tumors, which account for 70-90% of all carcinoid tumors and tend to be centrally located, the number of mitoses is less than 2 per 10 HPF, with no necrosis. The number of mitoses in atypical carcinoid tumors is between 3 and 9 per 10 HPF, and focal necrosis can be detected in such tumors (1,12). No significant cytological atypia, necrosis, or elevated mitotic activity (<2/10 HPF) was observed in our case.

In bronchial carcinoid tumors, calcification is present in up to 30% of cases, whereas ossification is present in about 10%. While calcification requires precipitation of calcium salts at sites where prolonged tissue damage occurs due to a number of factors, ossification is a complex process involving osteoblasts and various inducing agents. In carcinoid tumor cells, both osteocalcin, defined as an osteogenic differentiation marker, and the secretion of bone morphogenetic protein (BMP), which induces differentiation of pluripotent cells into osteoblastic cells, are thought to play a major role in ossification. Although there are publications reporting that intratumoral ossification in different tumors may be an indicator of metastatic potential, it is currently not possible to establish a relationship between intratumoral ossification and metastatic potential due to the insufficient number of carcinoid tumor cases (13-17). In our case, there was no evidence suggesting metastasis at the time of diagnosis or after 17 months of follow-up.

When the literature is reviewed, it is noteworthy that cases of carcinoid tumors with exaggerated osseous metaplasia are very rare. The first case was published in 1962, and 23 cases have been reported as case reports so far, with no large series (13-15,18-24). In the first relevant series, comprising 22 cases, published by Cooney et al. in 1979, ossification was found in seven cases, five of which were atypical carcinoids (25). In the series of 63 cases published by Ha et al. in 2013, ossification was found in six cases, all of which were typical carcinoids (26).

We conclude that even typical carcinoid tumors may present with exaggerated osseous metaplasia, although extremely rarely. Complete resection of such tumors will lead to cure without recurrence or metastasis.

Figure 1: Contrast-enhanced thoracic CT scan (mediastinal window) showing a mass of 37x30 mm in diameter, resulting in complete obstruction of the right middle lobe bronchus and leading to obstructive atelectasis of the middle lobe.

Figure 2: A) Macroscopical appearance of a solid, well-circumscribed tumoral lesion in the bronchial lumen and parenchyma, with gray to white cross-sectional areas. B) Nests of tumor cells revealing round nuclei, granular chromatin, and large eosinophilic cytoplasm (H&E; x400). C-D) Nests of tumor cells among anastomosing bone trabeculae (H&E; x100).
2019-04-12T13:29:37.855Z
2019-04-12T00:00:00.000
{ "year": 2020, "sha1": "b1af4e008098600138a064d1ff3967b8866dbfda", "oa_license": "CCBY", "oa_url": "http://www.turkjpath.org/pdf.php3?id=1894", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3fcce3fbf49a112e712fc00203bb8695e439e788", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
11111908
pes2o/s2orc
v3-fos-license
Coulomb effects in artificial molecules

We study the capacitance spectra of artificial molecules consisting of two and three coupled quantum dots using an extended Hubbard Hamiltonian model that takes into account quantum confinement, intra- and inter-dot Coulomb interaction, and tunneling coupling between all single-particle states in nearest-neighbor dots. We find that, for weak coupling, the interdot Coulomb interaction dominates the formation of a collective molecular state. We also calculate the effects of correlations on the tunneling probability through the evaluation of the spectral weights, and corroborate the importance of selection rules for understanding experimental conductance spectra.

Introduction

An artificial atom or quantum dot is a fabricated nanostructure in which electrons have been confined in all three dimensions, leading to a discrete energy spectrum of the electronic states. The electrostatic charging energy U = e^2/C (where C is the total capacitance) is about an order of magnitude larger than the average level spacing Δε, leading to interesting new phenomena due to the interplay between energy and charge quantization [1,2]. If the artificial atom is coupled to electron reservoirs, one can study transport properties through the quantum dot. At low bias voltage and low temperatures (kT << Δε), periodic oscillations of the conductance as a function of gate voltage have been observed [1] and explained in terms of the Coulomb blockade effect [3,4]. In this linear regime, the difference in gate voltage between two successive conductance peaks corresponds to the difference in energy between the ground states for N − 1 and N electrons, and is called the addition energy. Transport spectroscopy through these systems has been carried out [5] by varying both the gate voltage and the bias voltage. In the non-linear regime, the bias voltage is finite and excited states become accessible, providing additional transport channels. In a seminal paper, Meir et al. [6] studied the temperature dependence of the conductance peaks using an Anderson Hamiltonian with a Hubbard term for the intradot electron-electron interaction. Tunneling selection rules introduced by electronic correlations have been studied in the nonlinear regime by Weinmann et al. and by Pfannkuche and Ulloa [7], and their connection with the mechanism that appears to suppress many of the conductance peaks observed in experiments has been analyzed. Recently, experiments have been performed on arrays of two and three coupled quantum dots (artificial molecules), where the interdot coupling can be controlled [8], and the unfolding of each conductance peak into two and three peaks, respectively, is clearly observed as the coupling is increased. Transport experiments on artificial crystals show evidence for energy band formation [9]. The electron-electron interaction manifested in artificial atoms in the Coulomb blockade regime is expected to be equally important in arrays of quantum dots in the tunneling regime. Stafford and Das Sarma [10] studied the addition spectra of arrays of quantum dots to analyze the analog of the Mott metal-insulator transition and to explain experimental results [9] in 1D arrays of quantum dots. Other studies have been carried out using this approach, looking at the conductance in linear arrays of up to six quantum dots [11].
In this paper we study arrays of quantum dots using an extended Hubbard Hamiltonian that takes explicitly into account the interdot Coulomb repulsion and its effects on the addition spectrum. Also, we study the interplay between electron correlations and interactions through the evaluation of the overlap or spectral weights governing the tunneling probability through these systems. Model In the extended Hubbard Hamiltonian, besides the intradot charging energy U, we consider a more general tunneling matrix element t_αβ between single-particle states in nearest-neighbor dots and the interdot Coulomb interaction V, invariably present in experiments. The Hamiltonian can be written as

Ĥ = Σ_{j,α} ǫ_{j,α} Ĉ†_{j,α} Ĉ_{j,α} − Σ_{αβ,⟨i,j⟩} (t_αβ Ĉ†_{i,α} Ĉ_{j,β} + h.c.) + Σ_j (U_j/2) n̂_j (n̂_j − 1) + Σ_{⟨i,j⟩} V_{ij} n̂_i n̂_j ,    (1)

where n̂_j is the electron number operator at site j and ⟨i,j⟩ represents nearest-neighbor dots. In this equation, ǫ_{j,α} are the energy levels of the j-th quantum dot, which are assumed equally spaced with separation ∆_j; U_j is the intradot Coulomb repulsion for the j-th quantum dot; V_{ij} is the interdot repulsion between the i-th and j-th dots; and t_αβ is the tunneling matrix element between α and β orbital states. Hereafter, energies will be expressed in terms of U_1 = U, and V_{ij} is assumed constant and equal to V. The tunneling matrix elements t_αβ will be assumed to either be zero, or be given by t_αβ = γ²t/((∆E)² + γ²), where ∆E = ǫ_{i,α} − ǫ_{i+1,β} is the difference in energy between the levels in each dot involved in the tunneling event, t is the maximum value of the tunneling matrix element and γ is the width of the Lorentzian (set here equal to 1/t). This form of t_αβ simulates the expected decreasing coupling between levels that are not resonant. A specific microscopic model of the structure allows one to estimate the tunneling matrix elements and obtain this behavior. Details will be presented elsewhere. In all cases, we have taken the temperature such that kT = 0.04U, and U > t ∼ ∆_j ≫ kT for all j. The current through the system takes into account the fact that an electron entering or leaving the system causes a transition between the (N − 1)-electron state |α′⟩ and the N-electron state |α⟩. The corresponding tunneling rates Γ_{αα′} = Γ_n S_{αα′} depend on the single-electron tunneling rates Γ_n and the spectral weights S_{αα′} describing the correlations in the system. This quantity is given by [7]

S_{αα′} = |⟨α| Ĉ†_n |α′⟩|² ,    (2)

where n labels single-electron dot states. For a system of uncorrelated electrons this overlap will be either one or zero between any two states; however, electron correlations result in overlaps much less than unity, and consequently the tunneling probability is reduced considerably. Here we present results for linear arrays of two and three quantum dots with up to five spin-degenerate levels per dot. For the two-atom molecule we consider two cases: (a) the symmetric case, where the level spacing and intradot Coulomb interaction are the same in each site, i.e., ∆_1 = ∆_2 = ∆ and U_1 = U_2 = U; (b) the asymmetric case, where these parameters are different in each site. We assume a parabolic potential for each dot, so that these two parameters characterize the size of each atom. For the three-atom molecule, we consider linear arrays in the following two ways: (a) all atoms are equal and (b) a larger atom in the center, with appropriate parameters for each case.
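To make the structure of such a calculation concrete, the sketch below builds and diagonalizes a Hamiltonian of this type for a deliberately tiny toy system: two dots, two equally spaced orbitals per dot, and spinless electrons. This is an illustration only, not a reproduction of the paper's calculations, which use spin-degenerate dots with up to five orbitals; the parameter values mirror the ones quoted above (V = 0.3U, t = 0.1U).

```python
# Exact diagonalization of a toy extended Hubbard "artificial molecule".
# Assumptions (not from the paper): spinless electrons, two orbitals per dot.
import numpy as np
from itertools import product

N_ORB, N_SITES = 2, 2
DIM_SP = N_ORB * N_SITES                 # number of single-particle states
U, V, t, delta = 1.0, 0.3, 0.1, 0.5      # intradot U, interdot V, hopping, level spacing

def bits_of(s):
    return [(s >> k) & 1 for k in range(DIM_SP)]

def cdag_c(s, a, b):
    """Apply C†_a C_b to Fock state s; return (fermion sign, new state) or None."""
    bits = bits_of(s)
    if bits[b] == 0:
        return None
    sign = (-1) ** sum(bits[:b]); bits[b] = 0   # Jordan-Wigner sign for C_b
    if bits[a] == 1:
        return None
    sign *= (-1) ** sum(bits[:a]); bits[a] = 1  # and for C†_a
    return sign, sum(bit << k for k, bit in enumerate(bits))

def idx(dot, orb):
    return dot * N_ORB + orb

dim = 2 ** DIM_SP
H = np.zeros((dim, dim))
gamma = 1.0 / t                          # Lorentzian width, as in the text
for s in range(dim):
    bits = bits_of(s)
    n = [sum(bits[idx(j, 0):idx(j, 0) + N_ORB]) for j in range(N_SITES)]
    # diagonal part: orbital energies + intradot U + interdot V
    H[s, s] = (sum(orb * delta * bits[idx(j, orb)]
                   for j in range(N_SITES) for orb in range(N_ORB))
               + sum(0.5 * U * nj * (nj - 1) for nj in n) + V * n[0] * n[1])
    # hopping between the two dots with the Lorentzian level dependence
    for oa, ob in product(range(N_ORB), repeat=2):
        dE = (oa - ob) * delta
        t_ab = gamma**2 * t / (dE**2 + gamma**2)
        res = cdag_c(s, idx(0, oa), idx(1, ob))
        if res is not None:
            sign, s2 = res
            H[s2, s] += -t_ab * sign     # hop from dot 1 to dot 0
            H[s, s2] += -t_ab * sign     # Hermitian conjugate

print("lowest eigenvalues:", np.round(np.linalg.eigh(H)[0][:5], 3))
```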
The procedure is to solve the extended Hubbard Hamiltonian in the particle number representation by direct diagonalization to obtain the eigenvalues and eigenvectors for the system with N electrons. From the eigenvalues we can obtain the addition spectrum from

⟨N⟩ = kT ∂ ln Z / ∂µ ,

where Z is the grand canonical partition function, and the differential capacitance is obtained from

C_diff ∝ ∂⟨N⟩/∂µ .

The spectral weights can be calculated from the eigenstates of N − 1 and N electrons and equation (2). Results For two dots, in the symmetric case, with t_αβ = 0 and V = 0 (Fig. 1(a)), we obtain peaks in the differential capacitance spectrum, characteristic of isolated dots, separated by the Coulomb blockade energy ∆ + U. The dotted line shows the behavior of ⟨N⟩, the average number of electrons in the molecule, where the degeneracy present in our system can be observed. Keeping the tunneling coupling equal to zero and increasing the interdot Coulomb repulsion V, we observe (Fig. 1(b)) that the Coulomb blockade is partially destroyed with an unfolding of the peaks, with separation equal to the value of V, in such a way that a doublet is obtained from each of the original peaks. By increasing the value of V further, we observe the evolution of the spectrum from the "atomic" case of isolated quantum dots to a collective state of the system where both the intra- and interdot Coulomb interactions, U and V, are present, but it is V which is responsible for breaking the degeneracy and producing these collective states. In the asymmetric case, Fig. 1(c), we have a different situation where the peak at µ = 0 characteristic of isolated dots remains as a consequence of the alignment of the lowest energy levels for both dots, but the other two peaks unfold due only to the difference in size between the dots. Again we observe the formation of split peaks as V is increased (Fig. 1(d)). When t = 0.1U and V = 0, in the symmetric case, Fig. 2(a), the splitting of the peaks is proportional to the value of t. For weak coupling we see that the peaks have not completely unfolded; but, as V is increased (Fig. 2(b)), again we observe the formation of split-off features. Comparing with Fig. 1(b), we note that although both plots look similar, there is an important difference: here there are strong electron correlations not present in the situation corresponding to Fig. 1(b). Therefore, we may say that capacitance spectra do not yield information on the effects of correlations. As we will see below, we may obtain this information through the calculation of spectral weights. In the asymmetric case, for V = 0 (Fig. 2(c)), again we see a splitting of the peaks which now depends on both the tunneling coupling and the asymmetry, and as the interdot interaction V is turned on, we observe the formation of a series of collective states of the system (Fig. 2(d)). Figures 3(a) and (b) show the spectra for molecules with three identical dots and three orbitals, with interdot tunneling (t = 0.1U), for the cases V = 0 and V = 0.3U. Since we are restricted to interactions between nearest neighbors, we obtain results similar to the corresponding case for two dots, shown in Figs. 2(a) and 2(b). Considering an atom with a larger size at the center of the molecule (Figs. 3(c) and 3(d)), we see that the reduction in symmetry improves the formation of collective molecular states. As the number of dots increases we observe how each peak in the capacitance spectrum evolves into a "miniband" consisting of as many peaks as dots in the artificial molecule. One notes gaps between successive minibands which in general decrease as the coupling (t or V) is increased.
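The thermodynamic step above translates directly into a short numerical routine. The sketch below computes ⟨N⟩(µ) and its derivative for an assumed, constant-interaction-style toy spectrum rather than the diagonalized one; the peak positions it prints play the role of the differential capacitance peaks discussed in the Results.

```python
# <N>(mu) and differential capacitance from a set of many-body eigenvalues.
# The toy spectrum E_i(N) below is an assumption for illustration only.
import numpy as np

kT, U, delta = 0.04, 1.0, 0.5
sp = np.repeat(delta * np.arange(3), 2)          # spin-degenerate single-particle levels

def toy_levels(N, n_exc=3):
    E0 = 0.5 * U * N * (N - 1) + sp[:N].sum()    # crude ground-state energy
    return E0 + delta * np.arange(n_exc + 1)     # plus a few excited states

spectra = {N: toy_levels(N) for N in range(7)}

def mean_N(mu):
    # log-sum-exp shift for numerical stability of the grand canonical sums
    pairs = [(N, -(E - mu * N) / kT) for N, Es in spectra.items() for E in Es]
    m = max(x for _, x in pairs)
    w = [(N, np.exp(x - m)) for N, x in pairs]
    Z = sum(x for _, x in w)
    return sum(N * x for N, x in w) / Z

mus = np.linspace(-0.5, 8.0, 2000)
Ns = np.array([mean_N(m) for m in mus])
cap = np.gradient(Ns, mus)                       # differential capacitance ~ d<N>/dmu

interior = (cap[1:-1] > cap[:-2]) & (cap[1:-1] > cap[2:])
mask = np.r_[False, interior, False] & (cap > 0.2 * cap.max())
print("capacitance peaks near mu =", np.round(mus[mask], 2))
```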
The capacitance spectrum exhibits a collective property or state which is a signature of the artificial molecule, in a similar sense as the energy band structure defines a system in solid state physics. The capacitance peaks correspond to fluctuations in N, the number of electrons in the system, determined by changes in the chemical potential µ as the gate voltage is varied in transport spectroscopy or capacitance experiments, and they obey selection rules which depend on the electron correlations in the system. We see from these results that, for weak tunneling, the effect of the interdot Coulomb interaction is very important in the formation of collective states in coupled quantum dot arrays. The general behavior we observe for the evolution of the differential capacitance peaks as a function of coupling, with interaction effects present, is in agreement with experimental results [8] regarding the position of the conductance peaks. In Figure 4, we show results for the spectral weights S_{αα′} as a function of the energy difference ∆E between the states involved in the transition, corresponding to the case where the number of electrons in the system goes from N = 2 to N = 3, keeping the interdot Coulomb interaction fixed at V = 0.3U. For t_αβ = 0, Figure 4(a) shows that for two symmetric quantum dots with three orbitals the electrons are totally uncorrelated. This can be understood since in our model Hamiltonian the interaction terms are diagonal. When t = 0.1U, we see from Fig. 4(b), for the same system, an overall reduction in the values of the spectral weights due to the presence of correlations, since in this case a state of the system is made up of a large number of single-particle states with very different occupation probabilities. Figure 4(c) shows the spectral weights corresponding to the case in Fig. 3(c), and Fig. 4(d) those for an asymmetric double-dot system with five orbitals. We can see that a large number of overlaps with small values appears, implying that in this system most of the channels will make only a small contribution in transport experiments. These are the effects of electron correlations caused by tunneling coupling, which cannot be appreciated in the corresponding addition spectra (Figs. 1(b), 2(b)). If we calculate the average value of the overlap for these two cases, we obtain 0.083 and 0.067, respectively. Conclusions We investigate artificial molecules consisting of two and three quantum dots coupled in series. The dots can be either equal or of different sizes. We have applied an extended Hubbard Hamiltonian to calculate the capacitance spectra and spectral weights for these systems. We find that the peak positions depend on both inter- and intra-dot interactions and observe how characteristic peaks of isolated dots develop into minibands, forming a series of molecular "collective states" for N electrons. We observe that the interdot Coulomb repulsion must be considered in the weak tunneling regime for an appropriate interpretation of peak splitting in conductance spectra experiments. We analyze the spectral weights for these artificial molecules to emphasize the interplay between interactions and electron correlations and their relation with the tunnel probabilities through these systems. Our results are in agreement with known selection rules for quantum dots.
2014-10-01T00:00:00.000Z
1996-06-23T00:00:00.000
{ "year": 1996, "sha1": "eed0bc1433423a1333a90b880972cb93d9e33c1a", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/9606163", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "eed0bc1433423a1333a90b880972cb93d9e33c1a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
55422730
pes2o/s2orc
v3-fos-license
Assessment of the Homogeneous Approach to Predict Unsteady Flow Characteristics of Sheet and Cloud Cavitation In this work, the homogeneous approach, frequently used to simulate cavitation in hydraulic machinery, is used to compute unsteady cavitating flows for two simplified geometries. After a quick review of the literature and a rigorous presentation of the proposed methodology, the detailed computed physics of sheet and cloud cavitation are compared with experimental observations and with theory. Results suggest that the assumption of a homogeneous medium is not suitable to predict the fine physics of attached cavitation and thus to predict its precise unsteady characteristics. However, the inhomogeneous approach, in which a momentum equation is solved for both phases under a volume of fluid (VOF) approach, is shown to be more promising. Although it is numerically less stable, such an approach allows the effective body to be modified by the presence of vapor, in contrast with the homogeneous approach. The resulting flow topology around the vapor cavity is found to better agree with the experimental observations, and thus the inhomogeneous approach offers the potential to better predict the unsteady characteristics of attached cavitation. Introduction In an era of emergence of new renewable energy technologies, hydraulic turbines have become the cornerstone of a complex energy market. As a quick, reliable source of renewable energy, they are operated more frequently in transient and off-design operating conditions to secure the network. As documented in Dörfler et al. [1] and further demonstrated by the works of Yakamoto [2] and Lewys [3], in off-design operating conditions, cavitation may occur and play a leading role in the dynamics of the fluid flow inside the turbine runners. The state of the art in the simulation of cavitation relies on the assumption of a homogeneous medium which is simply characterized by a mixture volume density. The physics of vaporization and condensation are then governed by different cavitation models. The literature is rich in studies assessing the capacity of numerous cavitation models to predict steady and unsteady characteristics of cavitation with a variable degree of success. For example, the works of Arndt and Song [4], Coutier-Delgosha [5,6], Frikha [7], Ducoin [8], Zwart [9] and many others have all proposed promising avenues in simulating sheet and cloud cavitation with the homogeneous approach under various assumptions for phase change. However, none of those works has focused on the fine flow physics associated with the homogeneous assumption close to the vapor cavity. As recalled by Brennen [10], this homogeneous assumption is only reasonable if one considers that the dispersed phase is formed of small bubbles, well mixed with the liquid and mainly transported by its convection. However, in the case of attached cavitation, where a significant vapor cavity is present along the body, it is not clear whether such an assumption is well suited. The objective of the work presented in this paper is thus to assess the capabilities of the homogeneous approach implemented in a widely used commercial solver to predict the unsteady characteristics of attached cavitation on two experimentally tested setups. The results are compared to those obtained from simulations using the inhomogeneous approach on a case of attached cavitation.
Numerical Methodology To perform the computations, the commercial solver ANSYS CFX 14.5 [11,12] is used on one-cell-thick pseudo-2-D meshes of the flow fields. In this work, cavitation is modeled through a mass-transfer ideology via the Rayleigh-Plesset model included in CFX, where the continuity equation for the vapor phase states that

∂(α_v ρ_v)/∂t + ∇·(α_v ρ_v u_v) = Γ ,    (1)

where the subscripts v and l correspond respectively to the vapor and liquid phases and Γ is the mass transfer per unit volume which is being vaporized. It is calculated at each time step through a simplified Rayleigh-Plesset equation (assuming a shared pressure field), adapted both for vaporization (ṁ⁺) and condensation (ṁ⁻):

ṁ⁺ = C_vap (3 α_nuc (1 − α_v) ρ_v / R_b) √(2(p_v − p)/(3 ρ_l))   for p < p_v ,    (2)

ṁ⁻ = C_cond (3 α_v ρ_v / R_b) √(2(p − p_v)/(3 ρ_l))   for p > p_v ,    (3)

where C_vap, C_cond, α_nuc, R_nuc and R_b are constants that are defined in the solver's documentation [11]. In the inhomogeneous approach, interphase momentum transfer is accounted for via additional terms in the Navier-Stokes equations, shown in Equations (4) and (5):

∂(α_v ρ_v u_v)/∂t + ∇·(α_v ρ_v u_v ⊗ u_v) = −α_v ∇p + ∇·[α_v µ_eff (∇u_v + (∇u_v)ᵀ)] + M_v ,    (4)

∂(α_l ρ_l u_l)/∂t + ∇·(α_l ρ_l u_l ⊗ u_l) = −α_l ∇p + ∇·[α_l µ_eff (∇u_l + (∇u_l)ᵀ)] + M_l ,    (5)

where µ_eff is the effective viscosity (µ_eff = µ + µ_t), as expressed by the Boussinesq assumption and modeled in this work via a standard k-ω SST turbulence model. In Equations (4) and (5), M is the force acting on a phase due to the presence of the other (i.e. drag; M_v = −M_l). By using a volume of fluid (VOF) approach to represent transport phenomena at the interface, one can use the interfacial area density A_lv = |∇α_v| to calculate the drag force exerted at the interface:

M_l = (C_D/8) ρ_m A_lv |u_v − u_l| (u_v − u_l) ,    (6)

where C_D = 0.44, which corresponds to the drag coefficient of spherical particles in Newton's regime, independent of the Reynolds number [11]. A homogeneous approach to simulate multiphase flows relies on the assumption of a mixture, simply characterized by a volume density ρ_m = α_v ρ_v + (1 − α_v) ρ_l, in which the velocity, turbulence fluctuations and temperature are shared homogeneously. With such assumptions, one can rearrange Equations (4) and (5) to obtain the Navier-Stokes equation of the mixture:

∂(ρ_m u)/∂t + ∇·(ρ_m u ⊗ u) = −∇p + ∇·[µ_eff (∇u + (∇u)ᵀ)] .    (7)

Experimental Cases and their Numerical Representation Two different 2-D geometries are proposed for comparison with simulations (Figure 1). The first geometry corresponds to a cavitation tunnel studied by Leroux and Astolfi [13-15], in which a NACA hydrofoil is positioned at a 6° angle of attack. The second case consists of an 8° throat venturi geometry studied by Barre and Aeschlimann [16,17]. For both cases, space and time variables are hereafter normalized with the reference length L and the convective time scale as x/L and t* = t/(L/U_∞), respectively. Boundary conditions and numerical representation To correctly represent both experimental setups, boundary conditions are set with great care after relevant validations [18]. As shown at the left of Figure 1, extensions of 20 chord lengths are set upstream and downstream of the hydrofoil. The walls of the tunnel are modeled as free-slip walls, without considerations for viscous effects. A total pressure condition is set at the inlet along with an averaged absolute pressure outlet, controlling the computed cavitation number σ, defined as σ = (p_ref − p_v)/(½ ρ_l U_∞²), where the actual velocity, and thus the Reynolds number, are results of the simulation. For the venturi geometry, a velocity inlet is used along with an averaged absolute pressure outlet; the inlet total pressure therefore becomes part of the solution. In this work, regimes of both sheet and cloud cavitation are simulated on the foil geometry (cases "Sheet-foil" and "Cloud-foil"), while the venturi geometry is simulated only in an unstable manner (case "Cloud-venturi"). For a quick review of the regimes of cavitation, one can refer to the experimental works of Arndt or Leroux [13].
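As a concrete illustration of the mass-transfer step, the sketch below evaluates a Zwart-type Rayleigh-Plesset source term of the form of Equations (2)-(3). The constant values used are the commonly quoted CFX defaults, and the flow state in the example is invented; neither is taken from this paper.

```python
# Zwart-type Rayleigh-Plesset mass-transfer source term (sketch).
# Constants below are commonly quoted ANSYS CFX defaults, assumed here.
import numpy as np

C_VAP, C_COND = 50.0, 0.01   # vaporization / condensation constants
ALPHA_NUC = 5e-4             # nucleation-site volume fraction
R_B = 1e-6                   # nucleation-site (bubble) radius [m]
RHO_V, RHO_L = 0.02, 998.0   # vapor / liquid densities [kg/m^3]
P_V = 3170.0                 # vapor (saturation) pressure [Pa]

def mass_transfer(p, alpha_v):
    """Interphase mass transfer [kg/(m^3 s)]; positive means vaporization."""
    if p < P_V:              # vaporization branch, Eq. (2)
        return (C_VAP * 3.0 * ALPHA_NUC * (1.0 - alpha_v) * RHO_V / R_B
                * np.sqrt(2.0 / 3.0 * (P_V - p) / RHO_L))
    # condensation branch, Eq. (3)
    return -(C_COND * 3.0 * alpha_v * RHO_V / R_B
             * np.sqrt(2.0 / 3.0 * (p - P_V) / RHO_L))

# example: a cell at 2 kPa holding 10% vapor by volume
print(f"Gamma = {mass_transfer(2000.0, 0.1):.3e} kg/(m^3 s)")
```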
The specific details of the experimental and numerical setups are given below in Table 1. Both numerical setups, along with their respective boundary conditions, are shown in Figure 1. For the foil geometry, the proposed numerical mesh is composed of nearly 146 000 elements and allows for a good resolution in the cavity area and in the wake region. For the venturi geometry, the retained structured mesh contains a total of 50 000 elements and is also properly adapted in the vapor cavity area. To assess the unsteady behavior of the cavitating flow, pressure probes are positioned in both cases as in the experiments. The unsteady simulations progress in time at a reduced discrete time step of t* = 0.01. Each unsteady simulation is initialized with the steady-state, single-phase flow solution in absolute pressure conditions. During the simulation, probes capture an absolute pressure signal that is transformed into a pressure coefficient. The time-averaged and RMS values are then calculated by eliminating the transient time of the cavity formation [18]. For all cases, the proper statistical convergence of unsteady flow characteristics was systematically verified. The simulations performed on the venturi geometry use a "High Resolution" advection scheme, while calculations done on the foil geometry use a 2nd-order upstream advection scheme. Depending on the ease of convergence, adjustments were made to the residual convergence criteria. A thorough validation of the methodology proposed here has been carried out and is presented in [18]. Homogeneous Simulations of Sheet and Cloud Cavitation When simulating sheet cavitation with a homogeneous approach, one rapidly notices that certain aspects of the physics associated with the re-entrant jet underneath the vapor cavity are discarded due to the underlying assumption of a shared momentum field. In Figure 2, for example, the flow field surrounding the sheet cavitation vapor cavity is shown both with velocity streamlines (left) and with the reduced vorticity (right). To facilitate interpretation of Figure 2, arrows have been added to point to the areas of interest. Near the leading edge of the foil at the left, the flow separates slightly because of the incidence, which leads to the formation of a region of pure vapor inside the separation bubble (α_v ≈ 1). In the closure region of the cavity, the re-entrant jet is visible as the liquid flowing near the wall moves toward the leading edge of the cavity. At that particular location, an important velocity gradient is created between the low-velocity re-entrant jet, near the wall, and the free-stream velocity over the cavity. One can see at the right of Figure 2 that in the homogeneous approach, the boundary layer develops on the hydrofoil wall as it would in a non-cavitating simulation. In the closure region of the cavity, where an adverse pressure gradient allows the pressure to reach non-cavitating conditions, the shear layer detaches from the foil. Finally, one can note that the region under the separated shear layer contains positive vorticity (ω_z > 0), related to the presence of the re-entrant jet. One could argue that the physics simulated here is not representative of what would be observed in experiments. The presence of vapor at the leading edge would indeed modify the effective body of the foil. The boundary layer, forming at the leading edge, would then develop on top of the liquid-vapor interface and contribute to the formation of the re-entrant jet by the diffusion of vorticity in the cavity closure region. This would indeed be closer to the experimental observations of Franc and Michel [19,20], Callenaere [21] and Kawanami [22].
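As an aside, the probe post-processing described above (conversion to a pressure coefficient, removal of the cavity-formation transient, then time-averaged and RMS statistics) can be sketched in a few lines; the reference quantities, the assumed 1 s transient and the synthetic signal below are illustrations only.

```python
# Probe post-processing sketch: absolute pressure -> Cp -> statistics.
import numpy as np

RHO, U_INF, P_REF = 998.0, 5.33, 1.02e5   # density, inflow speed, ref. pressure (assumed)
DT_STAR, L = 0.01, 0.15                   # reduced time step, reference length (assumed)
dt = DT_STAR * L / U_INF                  # physical time step [s]

t = np.arange(0.0, 5.0, dt)
# synthetic probe signal: mean depression + a 3 Hz cavity mode + noise
p = P_REF - 8e3 + 2e3 * np.sin(2 * np.pi * 3.0 * t) + 500.0 * np.random.randn(t.size)

cp = (p - P_REF) / (0.5 * RHO * U_INF**2)
steady = cp[t > 1.0]                      # discard the formation transient
print(f"time-averaged Cp = {steady.mean():.3f}, RMS of fluctuations = {steady.std():.3f}")
```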
However, even if the local flow field surrounding the cavity is not precisely simulated with respect to the existence of a modified effective body, the time-averaged pressure distribution and the RMS distribution of the pressure fluctuations for both sheet and cloud cavitation on the foil are found to match fairly well with experiments, as shown in Figure 3. It can be observed in Figure 3 that the time-averaged pressure distributions match quite well with the experimental data obtained by Leroux for both cases. As in the experiments for sheet cavitation (σ = 1.34), inside the cavity the pressure is mostly constant and equal to the vapor pressure (C_p ≈ −σ). When going further toward the trailing edge, an adverse pressure gradient allows the pressure to reach non-cavitating conditions. Again, as was observed experimentally, this recompression is associated with an increase in the amplitude of the pressure fluctuations, which is greatest in the cavity closure region. Regarding the "Cloud-foil" case, one can notice that even though the shape of the pressure distribution matches fairly well with experiments, it is slightly closer to the non-cavitating pressure distribution than to the experiments. At the bottom of Figure 3, one can see for the "Sheet-foil" case that the amplitude of the fluctuations is over-predicted from the cavity closure (x/L = 0.5) to the trailing edge of the foil. As mentioned by Leroux, inside the cavity the pressure fluctuations are mostly constant and equal to the fluctuations measured in non-cavitating conditions; these are well predicted by the simulations. However, for the case of cloud cavitation, the pressure fluctuations are greatly over-predicted in the vicinity of the leading edge (x/L < 0.4). From mid-chord to the trailing edge, the fluctuation distribution is again over-predicted, but its shape better matches the experimental data. For both cases, experimental pressure signals were measured on the foil, which helps to gain a better understanding of the computed and experimental physics. For the simulated cases, FFT analyses allow the frequency-domain content of the time-domain signals to be obtained. The resulting signals and frequency contents for the case of sheet cavitation on the foil, at x/L = 0.4 and x/L = 0.5 in the cavity closure region, are presented in Figure 4. It appears from Figure 4 that the unsteady flow that is simulated is not in agreement with what is being observed experimentally. The pressure signal obtained in the simulation contains high-amplitude, low-frequency content that is shown at the left of Figure 4 with black arrows. At the right of Figure 4, the black arrow shows the frequency corresponding to this movement at f = 3.10 Hz. We can also observe that the experimental data does not possess this low-frequency behavior. In the experiments, the signal contains energy at medium frequencies (f = 18.75 Hz) plus weaker fluctuations at higher frequencies. As shown at the right of Figure 4, the numerical signal also contains energy at higher frequencies with great amplitudes.
This tends to create a camber in the frequency spectrum from around f = 60 Hz and above, as shown with the red lines and arrows. One can also note on the upper left plot of Figure 4 that the pressure coefficient at x/L = 0.4 varies between the value without cavitation (shown with the dashed green line) and a value slightly above the vapor pressure (C_p = −σ, in the dashed blue line). This suggests a movement of the cavity closure caused by the instability of attached cavitation. It also suggests that the simulated cavity possesses two different pulsating behaviors. First, the cavity shows a large movement of its closure position, generating the fluctuations of lower frequencies. Secondly, it appears that the pressure fluctuations of higher frequencies are not caused by the movement of the cavity closure region itself but rather by the whole flow around the foil. Unsteady visualizations of the numerical simulation help to validate this last point. The same time-frequency signal analysis is proposed below in Figure 5 for the case of cloud cavitation on the foil. As one can see in the middle of Figure 5 (x/L = 0.5), the self-oscillating behavior of the vapor cavity is easily visible as the pressure oscillates from the saturated pressure value (C_p = −σ, blue dashed line) to the value without cavitation (dashed green line). The phenomenon repeats itself at a frequency of f = 2.96 Hz (shown with the black arrow at the right of Figure 5) and is associated with the collapse of the cavity, which generates a strong pressure pulsation (pointed out with a black arrow at the left of Figure 5). In the experiments, this behavior is characterized by a frequency of f = 3.5 Hz and leads to a strong fluid-structure interaction phenomenon. The latter can be explained by the important quantity of energy that is contained in the pressure fluctuations and their harmonics, as shown at the right of Figure 5. Regarding the computations done on the venturi geometry, the flow field surrounding the instantaneous vapor cavity of case "Cloud-venturi" is shown in Figure 6. One can see at the left of Figure 6 that the location, shape, and amount of vapor appear to be in relative agreement with the experimental observation, if one excludes the cloud of bubbles that was just shed by the main cavity. However, one can also see at the right of Figure 6 that the calculations are not as successful in predicting the time-averaged and unsteady characteristics of the cloud cavitation regime in the venturi geometry as they are in the case of the foil geometry. The unsteady shedding behavior of cloud cavitation was reproduced in the simulations at a frequency of 19.2 Hz, well below the measurements (45 Hz). The phenomenon rapidly appeared very energetic, which caused numerical instability problems. As was the case with "Cloud-foil", but now with a greater importance, the pressure pulsation that periodically appears (as pointed out with a black arrow at the left of Figure 5) leads to the over-prediction of both the time-averaged and the RMS of the pressure shown in Figure 6 (as was the case for "Cloud-foil", bottom right of Figure 3). A certain amount of effort was spent assessing the best practices from the literature to produce more accurate results, as proposed by Zwart [9] and Coutier-Delgosha [5,6,15], but without any success [18].
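The FFT analyses referred to above reduce, in essence, to a windowed discrete Fourier transform of the probe signal. A minimal sketch is given below; the sampling rate and the synthetic signal (a low-frequency cavity mode plus a weaker medium-frequency component) are assumptions.

```python
# FFT of a probe pressure signal (sketch).
import numpy as np

fs = 1000.0                                 # sampling frequency [Hz] (assumed)
t = np.arange(0.0, 10.0, 1.0 / fs)
x = (np.sin(2 * np.pi * 3.0 * t)            # low-frequency cavity mode
     + 0.3 * np.sin(2 * np.pi * 18.75 * t)  # weaker medium-frequency content
     + 0.05 * np.random.randn(t.size))

x = (x - x.mean()) * np.hanning(x.size)     # detrend and taper to limit leakage
spec = np.abs(np.fft.rfft(x)) / x.size
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
dominant = freqs[np.argmax(spec[1:]) + 1]   # skip the DC bin
print(f"dominant frequency ~ {dominant:.2f} Hz")
```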
For all cases studied in this work, despite being able to predict the shape and location of the vapor and the physical mechanisms involved, the proposed methodology does not allow the accurate prediction of the unsteady flow characteristics caused by attached cavitation. This might be caused by the model's inability to reproduce the underlying physics of the re-entrant jet, as discussed in relation to Figure 2. The next sections present simulations performed with the inhomogeneous approach on a case of sheet cavitation, to identify what physics the consideration of an interface may induce. Comparisons with the Inhomogeneous Approach By using the inhomogeneous model included in ANSYS CFX (described in Equations (4) and (5)), a higher cavitation number case (σ = 1.72) of sheet cavitation is simulated on the foil geometry. Numerical stability rapidly became problematic with the inhomogeneous approach; it was indeed not possible to simulate lower σ cases because of numerical divergence. For the two new cases of the flow around the foil, the time step is set to t* = 0.05 and a lighter mesh of 50 000 elements is used, both to improve numerical convergence. For both cases, the statistical convergence of the unsteady flow characteristics was validated. The comparison of the homogeneous ("HOM") and inhomogeneous ("INH") cases of the foil at σ = 1.72 is presented below in Figure 7 with the reduced vorticity and vapor volume fraction contours. It appears that with the homogeneous approach, only a small region of the leading edge is filled with pure vapor. This is in contrast with the inhomogeneous approach, in which a larger amount of vapor is found at the leading edge of the foil, which also better fits the qualitative experimental observations of Leroux [13]. As a reminder, one of the problems of the homogeneous approach highlighted in this work is the lack of modification of the effective body by the presence of vapor. As observed at the right of Figure 7, the vorticity contours clearly show that with the inhomogeneous approach, the presence of vapor modifies the effective body encountered by the liquid flow, as the boundary layer develops at the water-vapor interface. As shown below in Figure 8, even though the resulting time-averaged pressure distributions are quite similar, the resulting different flow topology around the cavity leads to a significantly different RMS distribution of the pressure fluctuations. One can see at the left of Figure 8 that the cases "HOM" and "INH" give slightly different time-averaged pressure distributions inside the vapor cavity. As a result, the adverse pressure gradient, which allows the flow to return to non-cavitating conditions, is stronger when using the inhomogeneous approach. One can finally observe that the flow simulated with the homogeneous approach is very calm and generates almost no pressure fluctuations, while considerable fluctuations are induced in the closure region with the inhomogeneous approach, possibly caused by the greater adverse pressure gradient. It can be seen at the top right of Figure 8 that inside and downstream of the vapor cavity, the liquid velocity is greatly reduced as the effective body of the foil is modified. As shown at the bottom right of Figure 8, the resulting effective body leads to the detachment of the boundary layer, which rolls up and diffuses over the vapor cavity, as was experimentally observed by Gopalan and Katz [23].
On the other hand, it appears that the small and late separation of the shear layer downstream of the cavity in case "HOM" only leads to a slight increase in the turbulent energy level. While no experimental data are available for validation at σ = 1.72, the present investigation is quite revealing when considering the simulated physics. It was demonstrated that the modification of the body is only effective when using the inhomogeneous approach, as per the vorticity plot of Figure 7, which shows the shear layer developing on the interface rather than at the wall. In the homogeneous approach, the two-phase mixture is indeed considered as a continuous medium. Although no detailed time evolutions of the pressure signal were available, the vorticity injected by the detached shear layer is expected to induce a richer dynamic behavior in the closure region of the cavity, which is suspected to play a considerable role in the spectral response of unsteady cavitation. This also suggests that, locally in the cavity area, the hypothesis of a continuous medium may not be appropriate. In fact, close to the foil, the presence of vapor tends to modify the effective body encountered by the liquid flow. The unsteady physics resulting from this phenomenon may therefore only be characterized with an inhomogeneous approach, as they are partly associated with the vortical motion caused by the resulting modified body. Thus, it appears that the consideration of an interface might be critical in assessing the fine characteristics of unsteady cavitating flows. Conclusion Through this work, the capacity of the homogeneous approach to solve cavitating flows and predict the resulting unsteady flow characteristics, when used with the Rayleigh-Plesset cavitation model, has been assessed. By performing simulations on two relevant geometries, it was shown that the location, shape and amount of vapor were always qualitatively close to experimental observations. For the foil geometry, the proposed methodology resulted in accurate time-averaged results for both sheet and cloud cavitation regimes. However, for the venturi geometry, both the time-averaged and unsteady pressure characteristics were amplified by the numerical pressure pulsations associated with the collapse. It was further observed that the homogeneous approach greatly simplifies the flow field surrounding the cavity, since the body encountered by the flow is not modified by the appearance of vapor at the leading edge. It is suggested that the vortical motions associated with the modified body would contribute to the proper unsteady response of cavitation. By using a more complex numerical implementation, solving the flow as an inhomogeneous medium in which both phases share an interface, it was found that the effective body of the foil becomes altered by the presence of vapor, which leads to a more complex but more physically relevant flow field around the vapor cavity. Numerical instability issues, however, limited the possibility to compare with experimental data and thus to further study the computed physics. More algorithmic developments in the simulation of inhomogeneous cavitation (with the free-surface or VOF approach) should aim at rendering the method more numerically robust. This would allow comparisons of simulations of attached cavitation with detailed experimental data, and thus the capabilities of the inhomogeneous approach could be clearly evaluated.
Nevertheless, from the physics at play, it clearly appears that inhomogeneous simulations of cavitation could lead to better predictions of its unsteady characteristics. For applications in hydraulic machinery, however, it does not currently appear possible to predict the unsteady flow characteristics caused by cavitation accurately, robustly and in a general manner.
2019-04-22T13:04:14.719Z
2016-12-10T00:00:00.000
{ "year": 2016, "sha1": "01548d5682d7e547029e85c283c8bb6f7d94188c", "oa_license": "CCBY", "oa_url": "https://doi.org/10.4172/2168-9873.1000242", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "4015e8ae50a91446313f6c35391d24deae483963", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Physics" ] }
271063657
pes2o/s2orc
v3-fos-license
Quantified planar collagen distribution in healthy and degenerative mitral valve: biomechanical and clinical implications Degenerative mitral valve disease is a common valvular disease with two arguably distinct phenotypes: fibroelastic deficiency and Barlow's disease. These phenotypes significantly alter the microstructures of the leaflets, particularly the collagen fibers, which are the main mechanical load carriers. The predominant method of investigation is histological sectioning. However, the sections are cut transmurally and provide a lateral view of the microstructure of the leaflet, while the mechanics and function are determined by the planar arrangement of the collagen fibers. This study, for the first time, quantitatively examined the planar collagen distribution in health and disease using second harmonic generation microscopy throughout the thickness of the mitral valve leaflets. Twenty diseased samples from eighteen patients and six control samples were included in this study. Healthy tissue had highly aligned collagen fibers. In fibroelastic deficiency they are less aligned, and in Barlow's disease they are completely dispersed. In both diseases, collagen fibers have two preferred orientations, which, in contrast to the almost constant single orientation in healthy tissues, also vary across the thickness. The results indicate altered in vivo mechanical stresses and strains on the mitral valve leaflets as a result of disease-related collagen remodeling, which in turn triggers further remodeling. Among the components of the extracellular matrix, collagen is the most important, as it is the main mechanical contributor to the proper functioning of mitral valve leaflets (MVL) [17-19]. A disruption of the collagen network leads to significantly altered biomechanics and consequently malfunction of the mitral valve leaflet. Histological examination has traditionally been the standard method of microstructure analysis in DMVD [8,10,12,15,16]. The histological analyses examine the tissue laterally, due to the thin leaflets of the mitral valve, like the schematics in Fig. 1. While this provides valuable insights, it is not adequate for a full analysis of the biomechanics and function of the mitral valve leaflets, as these are governed by the planar organization of the collagen fibers [20-26]. How the collagen fibers are dispersed and oriented in the plane orthogonal to the histological sections is of decisive importance, both in terms of biomechanics and of clinical implications. Various advanced imaging techniques have been employed to quantify the alignment and orientation of collagen fibers. Small-angle light or X-ray scattering (SALS/SAXS) [23,27,28], polarized spatial frequency domain imaging (pSFDI) [24,25] and second harmonic generation (SHG) imaging [5,21,26] are the most notable ones. SAXS and pSFDI provide a large field of view and fast image acquisition. However, these techniques examine an aggregated signal, which means that all the collagen fibers over the entire thickness of the tissue are aggregated in one acquisition plane. On the other hand, SHG provides excellent lateral and transmural resolutions, enabling a thorough examination of collagen organization, particularly in the case of pronounced collagen alteration as associated with the phenotypes of DMVD. Regardless of the method of investigation, previous studies on the planar distribution of collagen fibers have primarily relied on healthy tissues or tissues from animals [26,29].
In this study, we examine differences in the planar collagen alignment and preferred orientations between FED, BD and healthy tissues to demonstrate the marked alteration in collagen remodeling. To the best of our knowledge, this is the first comprehensive description of the planar collagen distribution in degenerative mitral valve disease. This is of particular interest because the planar collagen distribution dictates the mechanical behavior of the leaflets and thus the function of the mitral valve. This study has the potential to be used for a more optimal design of tissue-engineered heart valves. The results presented here can also inform in silico analyses for more realistic modeling towards personalized medicine in heart valve surgery. The biomechanical impact of the planar collagen distribution could help provide pathophysiological explanations for the observed differences between fibroelastic deficiency and Barlow's disease. It has the potential to guide treatment strategies aimed at both achieving optimal blood flow dynamics and restoring the normal biomechanics of the mitral valve [14].

[Figure 1 caption, displaced by extraction: (A) Healthy mitral valve leaflet, with a collagen-rich fibrosa, a spongiosa layer consisting mostly of glycosaminoglycans (GAGs), and an elastic atrialis. The entire leaflet is also populated with quiescent valvular interstitial cells (qVIC). (B) Mitral valve with fibroelastic deficiency (FED) and (C) Barlow's disease. Both diseases are characterized by fragmented collagen and elastin fibers with activated valvular interstitial cells (aVIC), exhibiting myofibroblast-like behavior.]

Collagen fiber distribution from SHG acquisition Figure 2 shows SHG images and the corresponding quantified distributions for three representative samples of control (healthy), FED and BD. The SHG images are acquired at different depths from the ventricular side of the leaflets. Each SHG image is quantified by an angular fiber distribution through image analysis. Figure 2 illustrates significant differences in collagen fibers between the FED and BD groups compared to the control group. These differences include changes in collagen fiber alignment and in the preferred orientations, which are investigated quantitatively between all the samples in the control, FED and BD groups.
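The image-analysis step that turns each SHG acquisition into an angular fiber distribution is not detailed in this excerpt; one common way to do it is gradient structure-tensor analysis, sketched below on a synthetic striped image. The smoothing scale, the coherence weighting and the test pattern are all assumptions for illustration, not the authors' pipeline.

```python
# Angular fiber distribution from an image via the gradient structure tensor.
# The smoothing scale and synthetic test pattern are assumptions (sketch only).
import numpy as np
from scipy.ndimage import gaussian_filter

def angular_distribution(img, sigma=2.0, n_bins=180):
    gy, gx = np.gradient(gaussian_filter(img, sigma))
    # locally smoothed structure-tensor components
    Jxx = gaussian_filter(gx * gx, sigma)
    Jyy = gaussian_filter(gy * gy, sigma)
    Jxy = gaussian_filter(gx * gy, sigma)
    # fibers run perpendicular to the dominant gradient orientation
    theta = np.mod(0.5 * np.arctan2(2 * Jxy, Jxx - Jyy) + np.pi / 2, np.pi)
    coherence = np.sqrt((Jxx - Jyy) ** 2 + 4 * Jxy**2)   # anisotropy weight
    hist, edges = np.histogram(theta, bins=n_bins, range=(0, np.pi),
                               weights=coherence)
    return hist / hist.sum(), 0.5 * (edges[:-1] + edges[1:])

# synthetic "fibers" oriented ~30 degrees from the x axis, for a quick check
yy, xx = np.mgrid[0:256, 0:256]
a30 = np.radians(30)
img = np.sin(2 * np.pi * (-xx * np.sin(a30) + yy * np.cos(a30)) / 12.0)
rho, angles = angular_distribution(img)
print(f"peak orientation ~ {np.degrees(angles[np.argmax(rho)]):.1f} deg")
```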
Collagen fiber alignment The collagen fiber distributions are characterized by an average fiber alignment parameter (a) that represents the degree of collagen fiber alignment: the higher the parameter a, the higher the alignment of the collagen fibers. As shown in Fig. 2, collagen fibers in FED and BD undergo marked changes compared to the control group. Collagen fibers in the control sample (Fig. 2A-E) are highly aligned, while collagen fibers in FED (Fig. 2F-J) and BD (Fig. 2K-O) are more dispersed. The alignment of collagen fibers (a) based on the SHG analysis was calculated for each depth and averaged between samples within each group; the result is shown in Fig. 3A. In the control group, collagen fibers were highly aligned to a depth of 300 µm from the ventricular side, which also corresponds to the collagen-rich fibrosa layer. For FED samples, the collagen fibers were on average less aligned than in the control samples in the fibrosa layer, indicating a higher dispersion and remodeling of the collagen fibers. In both the control and the FED samples, the alignment of collagen fibers decreased to the same level beyond a depth (from the ventricular side) of 300 µm. Interestingly, BD samples exhibit a nearly constant and low alignment of collagen fibers throughout the thickness on average, at the same level as the control and FED samples beyond the depth of 300 µm. This indicates that the highly aligned collagen fibers up to the depth of 300 µm, observed in the control group, were more dispersed in BD samples. To compare collagen fiber alignment between samples, an averaged fiber alignment parameter was calculated for each patient over the entire measured depth and compared using the ANOVA test and boxplots (see Fig. 3B-1). For the two samples from the same patient in FED and BD, the average value between samples was used in the ANOVA test. For the healthy tissue, the average value between segments A2 and P2 was used for each subject (see Fig. 7). The control, FED and BD groups differ from each other in the distribution of collagen fiber alignment, but no significant difference was observed. From Fig. 3, it can be seen that the control group has distinct layers: the 300 µm thick fibrosa layer with highly aligned collagen fibers (a ≈ 6) and other layers with less alignment and higher dispersion (a ≈ 2). The FED group showed less alignment in the fibrosa layer, but the layer remained distinct from the other layers. However, there was no difference in the alignment of collagen fibers across the thickness in BD. The detailed collagen fiber alignment throughout the depth is shown in Table 1. The average collagen fiber alignment (a) for control samples was 3.54 ± 0.79, versus 2.8 ± 0.79 and 2.15 ± 0.43 in FED and BD, respectively. Moreover, the alignment of collagen fibers in healthy tissue is significantly higher up to a depth of 300 µm, corresponding to the fibrosa layer, as shown in Fig. 3A. As presented in Fig. 3B-2, the control group has significantly higher collagen fiber alignment compared to BD. Furthermore, for each pair of depths z and z′, the bivariate correlation function C(z, z′) for the collagen fiber alignment, a(z) and a(z′), was calculated based on Eq. (3) and is presented in Fig. 4.
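Since Eq. (3) itself is not reproduced in this excerpt, the sketch below assumes a Pearson correlation across the sample pool for each pair of depths; this yields a depth-depth correlation map of the kind shown in Fig. 4. The synthetic alignment profiles (an aligned fibrosa down to roughly 300 µm, then dispersed tissue) are invented for illustration.

```python
# Depth-depth correlation map of alignment profiles a(z) across the sample
# pool (Pearson correlation assumed in place of Eq. (3); sketch only).
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_depths = 6, 30
depth = np.linspace(0, 900, n_depths)           # microns from the ventricular side
# synthetic profiles: aligned fibrosa (high a) to ~300 um, dispersed beyond
a = (2.0 + 4.0 / (1.0 + np.exp((depth - 300.0) / 40.0))
     + 0.5 * rng.standard_normal((n_samples, n_depths)))

C = np.corrcoef(a.T)                            # C[i, j] = corr(a(z_i), a(z_j))
i, j = np.argmin(abs(depth - 100)), np.argmin(abs(depth - 200))
print(f"C(100um, 200um) ~ {C[i, j]:.2f}")
```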
The control group has a broad region of higher correlation spanning a significant portion of the depth. This consistently high correlation suggests a uniform collagen fiber alignment, which likely reflects the natural, undisturbed state of the collagen fiber distribution in the valve leaflet. For the FED samples, the correlation map exhibits a clear concentration of higher correlation coefficients, represented as warmer colors, around the initial and middle depths. This creates two clusters with high correlation, which suggests a strong intra-layer correlation of the collagen fibers within the fibrosa and spongiosa, while there is a weak correlation between these two layers, in contrast to the control group. A similar pattern is observed in the BD state; however, the correlation values are lower than in the FED samples, and at some depths there is no correlation at all. Therefore, the intra-layer correlation is weaker in BD, suggesting more diffuse degeneration.

[Figure 3 caption, displaced by extraction: ... (G, H) the angle between preferred orientations for control (black solid curve), FED (red dashed curve) and BD (blue dash-dotted curve). Plots A, C and E are the averages of all samples within each group, and boxplots B, D and F are calculated averages for each patient. For collagen fiber alignment, B-1 is the boxplot of the average patient value over the entire acquisition depth, while B-2 only considers the first 300 µm, which corresponds to the fibrosa layer. In the boxplots, data outliers are represented by a '+' sign.]

Preferred orientations of collagen fibers As shown in Fig. 2, collagen fibers in the control group are oriented along a single preferred orientation. There is only one preferred orientation, 30° away from the circumferential direction, which is also almost constant throughout the depth. On the contrary, FED and BD samples exhibited a different collagen distribution: the collagen fiber distribution showed two distinct preferred orientations and, in some cases, e.g., Fig. 2M, it is difficult to identify a clear preferred orientation for the collagen fibers. Moreover, the preferred orientations vary with depth within the FED and BD groups. The averaged variations of the first and second preferred orientations are presented in Fig. 3C and E, respectively, with the numerical values listed in Table 1. Similar to the collagen fiber alignment, averaged first and second preferred orientations were calculated over the entire measured depth for each patient and compared using the ANOVA test and boxplots (see Fig. 3D and F). There is a significant difference between all groups in both the first and the second preferred orientation. Interestingly, the two preferred orientations of the control group were close to each other, in contrast to FED and BD. Therefore, it seems reasonable to assume that there is a single family of collagen fibers in the control group, while there are two distinct families of fibers in FED and BD.

[Table 1, flattened by extraction: parameters of the collagen fiber distribution for healthy, FED and BD samples by depth from the ventricular side — the collagen fiber alignment a(−) and the first and second preferred orientations; values are reported as mean (one standard deviation). Blank areas are due to missing information on one or more samples at that specific depth (see Fig. 5).]

To provide an unbiased test of this hypothesis, the difference between the two preferred orientations at each depth was calculated for each sample and then averaged within each group, as shown in Fig. 3G. The boxplot in Fig. 3H is based on the averaged sample difference between the two preferred orientations. The angle between the preferred orientations for FED and BD was significantly different from control, while there was no significant difference between FED and BD. The median of the difference between the two preferred orientations for FED and BD was 43° and 55°, respectively, while this value for the control was 13°. It is evident that collagen fibers in FED and BD have a larger angle between the preferred orientations, making the two preferred orientations more pronounced compared to the control group.
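A small sketch of how two preferred orientations and the angle between them can be extracted from an angular distribution, respecting the 180° periodicity of fiber orientations, is given below; the bimodal test distribution and its mode positions are assumptions.

```python
# Two preferred orientations and the angle between them, from an angular
# distribution; the bimodal test distribution is an assumption (sketch only).
import numpy as np

angles = np.linspace(0, np.pi, 180, endpoint=False)
# bimodal, von Mises-like test distribution with modes at 20 and 75 degrees
rho = (np.exp(4 * np.cos(2 * (angles - np.radians(20)))) +
       0.7 * np.exp(4 * np.cos(2 * (angles - np.radians(75)))))
rho /= rho.sum()

# local maxima of the circularly padded distribution
pad = np.r_[rho[-1], rho, rho[0]]
is_peak = (pad[1:-1] > pad[:-2]) & (pad[1:-1] > pad[2:])
peaks, heights = angles[is_peak], rho[is_peak]
top2 = peaks[np.argsort(heights)[::-1][:2]]     # two strongest orientations

diff = abs(np.degrees(top2[0] - top2[1]))
diff = min(diff, 180.0 - diff)                  # orientations are pi-periodic
print(f"preferred orientations: {np.round(np.degrees(top2), 1)} deg; "
      f"angle between them = {diff:.1f} deg")
```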
Distribution map of collagen fibers A distribution map of collagen fiber content, differentiated by orientation and depth, can be generated, with the amount of collagen content represented by color. The results for all the investigated samples are presented in Fig. 5. The main preferred orientation, with the highest collagen content (indicated by the yellow color in Fig. 5), remains almost constant through the thickness for most of the control samples. However, for BD samples, it varies. For FED samples, some, e.g., Fig. 5K and O, exhibit an almost constant preferred orientation, similar to healthy tissues, while the rest display varying preferred orientations similar to the BD samples. The same results are also evident from the averaged plots in Fig. 3C and E. In conclusion, the healthy samples can be represented by one preferred orientation, while for BD and most of the FED samples there are two preferred orientations. Comparison of SHG versus histological analysis In addition to the SHG analyses, histological analyses were also performed for several samples. Figure 6 shows the histological analyses for three representative samples compared to their corresponding quantified collagen fiber alignment based on SHG. Histological analyses (Fig. 6G, H, K and L) show that collagen and elastin fibers and the layers are disrupted in FED and BD, and that there is an accumulation of GAGs in BD (Fig. 6J). Recently, it has been demonstrated that in DMVD a fibrous layer, called the superimposed tissue (SiT) layer, is formed on the original leaflet [16], possibly induced by abnormal mechanical stresses [30]. In the present study, a SiT layer was found in the BD sample on the ventricular side (Fig. 6I-L). As shown in Fig. 6, collagen fibers in the control sample were highly aligned to a depth of 300 µm from the ventricular side. Based on the histology section, this region corresponds to the fibrosa layer, the collagen-rich layer of the MVL (Fig. 6A-D). While collagen fibers are present in other layers, particularly in the atrialis (Fig. 6D), SHG analyses showed that these fibers are less aligned and more dispersed outside the fibrosa layer (Fig. 3A and Table 1). It is unlikely that this information could be derived from histological analyses. For the representative FED sample, the SHG analyses indicate a low collagen fiber alignment across the thickness of the leaflet. Several regions exhibit a random distribution without any preferred orientation, i.e., a = 0. Compared to control samples, FED samples lack regions with high alignment, even though collagen fibers are scattered up to 650 µm depth (Fig. 6H).
These findings are consistent with the SHG analyses, showing dispersed collagen fibers to a depth of 700 µm. For the BD sample, SHG analyses revealed low collagen fiber alignment at 0-300 µm depth, followed by increased alignment of collagen at a depth of 300-800 µm, although more dispersed compared to the control sample. The histological analyses indicate that these two regions correspond to the SiT and the fibrosa layer (Fig. 6I, J, L). Discussion Collagen fibers are the main contributor to the structural integrity and mechanical competency of the MVL [17-19]. In DMVD, the MVL is associated with a disrupted microstructure of collagen fibers [8,12,15,16], which is in turn associated with mechanically incompetent leaflets [4,5]. While the predominant traditional investigation method has been histopathological analysis, this technique primarily involves transmural sections and lateral examination. However, the MVL is a planar structure with collagen fibers arranged in layers orthogonal to the histological sections. Therefore, histopathological analysis offers limited insight into how the collagen fibers are distributed in each layer and how the orientation varies between these layers. This information is of particular interest because the biomechanics, and therefore the function, of the MVL are determined by the alignment of collagen fibers in the planar organization [20-24,26,31]. In this study, we used SHG as an artifact-free three-dimensional imaging modality of collagen fibers in the main phenotypes of DMVD, FED and BD. We then used appropriate image analysis methods to quantify the collagen organization at each layer up to 900 µm depth from the ventricular side. We focused on the collagen fiber alignment and the preferred orientation of collagen fibers. Previous studies have demonstrated increased synthesis and degradation of collagen fibers in myxomatous degeneration [32-36]. The present study shows that FED and BD have lower alignment in the fibrosa layer and two distinct preferred orientations. The lower alignment is possibly due to degradation, and the secondary orientation can be attributed to remodeling in an attempt to reinforce the tissue. The secondary direction also suggests that the mechanical stresses and strains on the leaflets are altered in the in vivo state of FED and BD, as growth and remodeling occur in the direction of the mechanical strain [37,38]. The present study also provides valuable information on the mechanics of the diseased MVL. With today's more conservative tissue resections, it is difficult to obtain sufficiently large samples for mechanical testing, which represents the gold standard for material modeling of biological tissues. However, previous studies on animal tissue and healthy mitral valves concluded that the tissue mechanics are determined by the distribution and strength of the collagen fibers [20-24,26,31]. In our previous study on the mechanical behavior of MVL from one FED and one BD patient, we found considerably weaker collagen fibers compared to other studies on healthy MVL [5], which is to be expected from the histological analyses due to collagen fragmentation [8,10,12,15,16]. This study now provides a detailed collagen fiber distribution for eighteen patients. The weaker and more dispersed collagen fibers in FED and BD result in higher mechanical strains in all directions, particularly circumferentially, where the collagen fibers of healthy tissue are aligned (Fig. 2A-E, Fig. 3C and E).
Higher strain leads to the activation of valvular interstitial cells, which in turn leads to more remodeling [37-40]. Moreover, the present study showed for the first time that there may be a secondary family of collagen fibers in diseased MVL. Understandably, this has not been reported before, as histological analyses of the microstructure in DMVD offer only a lateral view, which is limited in terms of biomechanical analysis. This finding is particularly important for the novel in silico studies investigating mitral regurgitation [41-43], as a quantified analysis of the microstructure in DMVD is a necessity. In addition, the current study will enable a more accurate mechanical modeling of the MVL [5]. Combining the findings of the current study with in silico analysis may provide further insights into how the quantified remodeling leads to mitral valve prolapse, and potentially guide future treatment. Understanding the complex relationship between biomechanics, perturbed collagen structure and disease states could have significant implications for providing long-term repair strategies [6]. While current surgical interventions for primary MR have demonstrated success in restoring valve competency and enhancing blood flow dynamics, it remains unclear whether this will reverse the microstructural remodeling of the mitral valve leaflet [6,14]. A more comprehensive understanding of the remodeling processes in FED and BD could facilitate customized repairs, aiming not only to achieve optimal blood flow dynamics but also to restore the normal biomechanics of the mitral valve. There is an ongoing debate as to whether BD and FED should be treated as two separate diseases or as variations within the same spectrum of disease. In this study, we used advanced imaging and image analysis to assess the remodeling of collagen fibers in the MVL as the most important mechanical factor. The results showed similar remodeling in FED and BD, but with less alignment in BD. The increased remodeling observed in BD, with the resulting deterioration in mechanical properties, may be a factor contributing to the complexity of the disease and the higher reoperation rate observed compared to FED [44,45]. Furthermore, BD is characterized by diffuse myxomatous degeneration, while FED is associated with localized myxomatous degeneration. Studies have demonstrated that myxomatous leaflets are less stiff and more extensible, correlating with the low alignment observed in this research [4,5]. BD tends to develop earlier in life than FED, which is considered age-related [14]. The chronicity of the disease could potentially explain the difference in the extent of collagen alignment. Although different gene expression patterns were identified in the mitral valve leaflets of FED and BD, a significant overlap in gene expression was found, possibly indicating compensatory changes [46]. However, further genetic studies and immunohistochemical studies of the molecular and cellular events are needed [3]. For example, GAGs play a major role in the increased thickness of BD samples (Fig. 6J) and may contribute to layer delamination and other mechanical incompetency [6,47-49].
6J) and may contribute to layer delamination and other mechanical incompetencies [6,47-49]. Nevertheless, the results of the current study suggest that FED and BD, regardless of whether they are the same or different diseases, both result in similar collagen remodeling. Because collagen is the main load carrier, remodeling will result in further alteration of the mechanical leaflet strains, and therefore most likely cause further remodeling, setting up a vicious cycle. Nevertheless, the causal relationship between DMVD remodeling and the occurrence of valvular regurgitation remains unclear.

One limitation of the current study may be the number of samples, but this should be seen in relation to the time- and resource-intensive process of sample preparation, image acquisition and processing. It is worth noting that the investigation of each sample consists of more than 150 images throughout the thickness. Therefore, more than 1500 images from FED and 1500 images from BD were investigated in this study. Another limitation is that the BD samples are significantly thicker than those from the control and FED groups. The imaging depth of 900 µm only covers the SiT and the fibrosa in the BD sample (Fig. 6I-L), or the fibrosa and part of the spongiosa in the absence of SiT, while it covers the entire tissue thickness in the control and FED samples. To mitigate this, in addition to the variation across thickness (Fig. 3A,C,E and G), we also investigated the pool of quantified variables using the boxplots (Fig. 3B,D,F and H). The collagen-rich fibrosa layer, which is the main load bearer, has been included in all the groups. Another limitation is that one must be careful when interpreting the absolute angle values (Fig. 3C and E). Even if the orientation of the samples and the circumferential direction were recorded at the time of explantation, and the sample for SHG was mounted with special care for the orientation, the absolute angle value is prone to error. For this reason, we also investigated the angular difference between the first and second preferred orientation at each depth for each sample as an unbiased methodology (Fig. 3G). After all, MVL is not a passive spectator. The transition zone from the annulus to the leaflets expresses smooth muscle α-actin in healthy tissue [50], and the valvular interstitial cells transform into an activated myofibroblast-like phenotype, expressing smooth muscle-associated contractile proteins [12,51,52]. While the study focused on collagen fibers, as the main component of the passive behavior of the MVL, further research is needed to investigate how DMVD affects the active behavior of the MVL and mitral apparatus.

In conclusion, this study is the first to examine the planar collagen remodeling of MVL in DMVD. In contrast to cross-sectional histological analyses, planar examination is important to correlate the microstructural disruptions in DMVD with the mechanical deficiency of MVL leading to MR. BD is associated with a complete dispersion of collagen fibers in the fibrosa layers, while FED is less dispersed, yet more so than in healthy tissue. In both FED and BD there was evidence of what needs to be characterized as a secondary family of collagen fibers. We believe that this study provides a deeper understanding of the interplay between DMVD, microstructural disruption and the biomechanics of MVL, which is central to both pathophysiologic understanding and the development of future treatment strategies.
Sample acquisition

The study population consisted of eighteen patients undergoing mitral valve surgery for severe MR, of which nine were diagnosed with FED (age 62 ± 11 yrs, two females and seven males) and nine with BD (age 57 ± 11 yrs, five females and four males). Samples from three deceased individuals (age 52 ± 12 yrs, two females and one male) acted as controls. BD was defined by echocardiographic examinations and intraoperative analysis as leaflet thickening with diffuse myxomatous degeneration, annular dilatation and billowing of several leaflet segments, whereas FED was characterized by leaflet prolapse due to chordal rupture in the absence of myxomatous degeneration.

Mitral valve tissue was obtained from valvular excisions during elective mitral valve surgery, by repair (n = 17) or replacement (n = 1). The mitral valve repair procedures included annuloplasty, valvuloplasty with resection, and neo-chordae implantation. Ten samples were acquired from each group of FED and BD patients (in each group, one patient contributed two samples), amounting to a total of twenty DMVD samples. Upon explantation, the samples were marked according to their orientation, with the circumferential direction defined as parallel to the annulus and the radial direction as orthogonal to the circumferential direction. The samples were then snap frozen and stored in the Bergen Cardiovascular Biobank (ID: 2014/828). For analysis, samples were transported in a liquid nitrogen dry shipper (Worthington CX100, US) and stored in the dry shipper until thawed for analysis. Control mitral valves were obtained post-mortem from patients undergoing autopsy and with no known prior cardiac disease. A total of six samples were obtained from three individuals, one from the P2 and one from the A2 segment of each patient (Fig. 7). After excision, the control mitral valves were transported to the testing facility using the same protocol as for surgically excised tissue.

The study was approved by the Regional Committee for Medical and Health Research Ethics (Project ID: 2016/1132 and 2016/2073). All examinations were performed according to the rules for the investigation of human subjects set out in the Declaration of Helsinki. All study participants provided written informed consent.

Sample preparation

Before analysis, the samples were thawed at room temperature and submerged in 1× PBS. The samples were then fixed in a 4% formaldehyde solution for 18 h. After tissue fixation, several samples were dissected into two sections: a radial section for histopathological analysis and another for SHG imaging of collagen structure. The rest were only used for SHG imaging of collagen fibers due to their smaller size.

The specimens for histology underwent a continued fixation process, followed by graded ethanol dehydration and paraffin embedding. A 5 µm thick section was then cut from each specimen and stained with Hematoxylin, Eosin and Saffron (HES), Elastin stain for elastin, Alcian Blue for glycosaminoglycans (GAGs) and Masson Trichrome for collagen. After tissue fixation, the SHG samples were chemically cleared with either a 1:2 Benzyl Alcohol:Benzyl Benzoate (BABB) solution [53] or SeeDB clearing [54]. SHG imaging is usually limited to in-depth imaging up to 200 µm from the surface; however, the imaging depth can be increased substantially by utilizing tissue clearing [55]. The tissue clearing renders the tissue translucent and allows in-depth tissue imaging using SHG, with minimal morphological alteration.
In BABB clearing, the samples are dehydrated in graded absolute ethanol: 50%, twice 70%, twice 95% and twice 100%, with each step lasting 30 min. Then, the sample is immersed in a 1:1 absolute ethanol:BABB solution for 4 h and finally in BABB solution for 18 h. In the SeeDB clearing process, the sample is first incubated at 25 °C in solutions of 20%, 40%, and 60% (weight:volume) fructose in 0.1× PBS. It is then immersed for 12 h in an 80% concentration solution, followed by another 12 h incubation in 100% (weight:volume) fructose in distilled water. Finally, the sample is incubated with fully saturated fructose solution in distilled water for 24 h. During the incubation process, the solution is rotated at 4 rpm using a tube rotator (PTR-35, Grant Instruments, UK).

Histology and second harmonic generation microscopy

The histological sections were scanned with the Olympus VS120 slide scanner (Olympus, Japan) at 20× magnification. SHG imaging was performed using a Leica SP8 (Leica Biosystems, Germany) with a Leica HCX IRAPO 25× objective and a numerical aperture of 0.95. The laser excitation was set at 890 nm to excite the collagen fibers. Collagen imaging was performed from the ventricular side of MVL up to a depth of 900 µm toward the atrial side through the thickness, with z-steps of 5 µm, as illustrated in Fig. 8A. To compensate for the scattering, the laser power was increased linearly. For the samples with histological sectioning, SHG imaging was performed near the side from which the histological section was dissected. The image field of view was 465 µm × 465 µm with a resolution of 1024 × 1024 pixels (Fig. 8B).

Quantification and statistical analysis

We use image analysis to determine the degree of collagen fiber alignment and preferred orientations through the thickness. To determine the distribution of collagen fibers in each image (Fig. 8C), a 2D Tukey window is applied to the image, which is then Fourier transformed and multiplied by its complex conjugate to find the power spectrum density. This allows the fiber direction to be distinguished based on frequency and orientation. Then, a wedge-shaped filter with a range of −89° to 90° and an increment of 1° is used to extract the fiber orientations at a certain angle θ. Finally, to smooth the data, a moving average filter with a range of 7° is applied over the angle θ. In this way, we quantify the collagen fiber distribution for each image at a specific depth, as shown in Fig. 8C. The distribution is then fitted by two families of fibers using a von Mises distribution (Fig. 8D), defined as

ρ(θ) = w ρ_vm,1(θ) + (1 − w) ρ_vm,2(θ), with ρ_vm,i(θ) = exp[a_i cos(2(θ − α_i))] / (π I_0(a_i)), i = 1, 2. (1)

Here ρ_vm,i(θ) is the von Mises distribution characterized by α_i and a_i, the mean fiber angle and fiber alignment parameter, respectively, where the subscript denotes the first or second fiber family, w is the weighting factor with values from 0 to 1, and I_0 is the zero-order modified Bessel function of the first kind.

We can now define an average fiber alignment parameter that represents the degree of collagen fiber alignment at each depth, i.e.,

a = a_1 w + a_2 (1 − w). (2)

The higher the value of a, the higher the alignment and the lower the dispersion. We also define the isotropic distribution, similar to a previous study [56], in which the alignment of collagen fibers was set to zero and no preferred orientation was assigned. Each SHG acquisition at a specific depth has a collagen density plot (Fig. 8D).
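For concreteness, the pipeline above can be sketched in a few lines of Python. This is a minimal illustration and not the authors' code: the window parameter, angular bin conventions, the angle-sign convention, and the starting values for the fit are all assumptions, with NumPy/SciPy standing in for the unspecified image-analysis tooling.

```python
import numpy as np
from scipy.signal.windows import tukey
from scipy.special import i0
from scipy.optimize import curve_fit

def orientation_distribution(img, n_angles=180):
    """Per-degree collagen orientation distribution of one square SHG image,
    estimated from the power spectrum of its windowed 2D Fourier transform."""
    n = img.shape[0]
    win = np.outer(tukey(n, 0.5), tukey(n, 0.5))          # 2D Tukey window
    F = np.fft.fftshift(np.fft.fft2(img * win))
    psd = (F * np.conj(F)).real                            # power spectral density
    y, x = np.indices(psd.shape)
    c = n // 2
    theta = np.degrees(np.arctan2(y - c, x - c)) % 180.0   # frequency-space angle
    r = np.hypot(x - c, y - c)
    mask = (r > 2) & (r < c)                               # drop DC peak and corners
    # wedge filter: accumulate spectral power in 1-degree angular bins
    hist, _ = np.histogram(theta[mask], bins=n_angles, range=(0.0, 180.0),
                           weights=psd[mask])
    hist = np.roll(hist, n_angles // 2)    # fibers lie orthogonal to their spectrum
    # 7-degree circular moving-average smoothing
    hist = np.convolve(np.r_[hist[-3:], hist, hist[:3]],
                       np.ones(7) / 7.0, mode="valid")
    return hist / hist.sum()

def two_family_von_mises(theta_deg, a1, alpha1, a2, alpha2, w):
    """Weighted mixture of two pi-periodic von Mises densities (Eq. 1)."""
    t = np.radians(theta_deg)
    vm = lambda a, alpha: (np.exp(a * np.cos(2.0 * (t - np.radians(alpha))))
                           / (np.pi * i0(a)))
    return w * vm(a1, alpha1) + (1.0 - w) * vm(a2, alpha2)

def fit_two_families(theta_deg, rho_per_degree):
    """Least-squares fit of Eq. (1) to a measured per-degree distribution."""
    model = lambda th, a1, al1, a2, al2, w: (
        two_family_von_mises(th, a1, al1, a2, al2, w) * np.pi / 180.0)
    p0 = [2.0, 60.0, 2.0, 150.0, 0.7]                      # assumed starting guess
    bounds = ([0, 0, 0, 0, 0], [50, 180, 50, 180, 1])
    popt, _ = curve_fit(model, theta_deg, rho_per_degree, p0=p0, bounds=bounds)
    return popt                                            # a1, alpha1, a2, alpha2, w

if __name__ == "__main__":
    # Synthetic check: fringes whose normal is at 30 degrees, so the "fiber"
    # lines run at ~120 degrees; the distribution should peak near 120.
    n = 256
    yy, xx = np.indices((n, n))
    img = np.sin(2 * np.pi * (xx * np.cos(np.radians(30))
                              + yy * np.sin(np.radians(30))) / 12.0)
    rho = orientation_distribution(img)
    print(np.arange(0.5, 180.5)[np.argmax(rho)])
```

On measured SHG stacks, calling fit_two_families at every depth yields the per-depth parameters (a_1, α_1, a_2, α_2, w) that the analysis below averages and compares across groups.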
The collagen density at each depth and angle can also be displayed in color, and the distributions of all images can then be stacked to create a 3D distribution map showing the content of collagen fibers at a specific depth and in a specific orientation, as shown in Fig. 8E.

Statistical analysis

To compare the distribution of planar collagen fibers in control, FED and BD, the quantified averaged degree of alignment (a) and the preferred orientations (α_1, α_2) were determined for each sample at every depth. The parameters for each specific depth were then averaged across samples within each group. Additionally, the difference between the two preferred orientations at each depth was calculated for each sample and subsequently averaged within each group. This approach ensures an unbiased examination, as the absolute value of the preferred orientation differs from one specimen to another.

Moreover, an average value over the entire depth is calculated for each patient. For the patients with two samples, the average value of the two samples was used. Then, the averaged values were compared between groups using the ANOVA test. In the entire statistical analysis of the fiber orientations, we leave out all isotropic layers, i.e., layers with a = 0 and therefore no preferred orientation. We use circular statistical analysis because the orientation data are circular in nature, and all computations are performed using custom scripts in MATLAB and the CircStat package for MATLAB [57,58].

To quantify the correlation of collagen fiber alignment at different depths of the mitral valve leaflets, we utilized the bivariate correlation function C(z, z′). For each pair of depths z and z′, the bivariate correlation function is calculated as

C(z, z′) = Σ_i [X_i(z) − X̄(z)][X_i(z′) − X̄(z′)] / sqrt( Σ_i [X_i(z) − X̄(z)]² · Σ_i [X_i(z′) − X̄(z′)]² ), (3)

where the sums run over i = 1, …, n. Here, X_i(z) and X_i(z′) represent the measured collagen fiber alignment at depths z and z′ for the i-th sample, X̄(z) and X̄(z′) are the mean values of these measurements at the respective depths, and n is the total number of samples. The resulting function C(z, z′) provides a matrix in which each element quantifies the degree of correlation between collagen alignments at two specific depths. The values of this function range from −1 to +1, indicating perfect negative correlation on one side and perfect positive correlation on the other, with zero indicating no correlation, allowing detailed spatial analysis of correlation patterns within the tissue.

Figure 1. (A) Illustration of the mitral valve. Mitral valve leaflets are comprised of three main distinct layers: (from the ventricular side) a collagen-rich fibrosa, a spongiosa layer consisting of mostly glycosaminoglycans (GAGs), and an elastic atrialis. The entire leaflet is also populated with quiescent valvular interstitial cells (qVIC). (B) Mitral valve with fibroelastic deficiency (FED) and (C) Barlow's disease. Both diseases are characterized by fragmented collagen and elastin fibers with activated valvular interstitial cells (aVIC), exhibiting myofibroblast-like behavior.
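As a computational note on the bivariate correlation function C(z, z′) defined above: it is a Pearson-type correlation across samples and reduces to a few lines of code. The sketch below is illustrative only; handling of missing samples at a given depth (the blank areas in Fig. 4) is omitted.

```python
import numpy as np

def bivariate_correlation(X):
    """C(z, z') between collagen alignment at all depth pairs (Eq. 3).
    X has shape (n_samples, n_depths); X[i, k] is the alignment of sample i
    at depth z_k.  Equivalent to np.corrcoef(X, rowvar=False)."""
    Xc = X - X.mean(axis=0)            # remove the mean profile per depth
    cov = Xc.T @ Xc                    # (n_depths x n_depths) co-moments
    sd = np.sqrt(np.diag(cov))
    return cov / np.outer(sd, sd)      # entries in [-1, +1]
```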
Figure 3. The variation across thickness and boxplots of (A, B-1, B-2) collagen fiber alignment, (C, D) first and (E, F) second preferred orientations, and (G, H) the angle between preferred orientations for control (black solid curve), FED (red dashed curve) and BD (blue dash-dotted curve). Plots A, C and E are the average of all samples within each group, and boxplots B, D and F are calculated averages for each patient. For collagen fiber alignment, B-1 is the boxplot of the average patient value for the entire acquisition depth, while B-2 only considers the first 300 µm, which corresponds to the fibrosa layer. In the boxplots, data outliers are represented by a '+' sign.

Figure 4. Bivariate correlation function C(z, z′) of collagen fiber alignment at different depths from the ventricular side, visualized as a color map for the FED, BD and control groups, where a value of 1 indicates a strong positive correlation and −1 a strong negative correlation. The blank area is due to missing information on one or more samples at that specific depth (see Fig. 5).

Figure 5. 3D distribution map of collagen fiber content differentiated by angle (x-axis) and depth (y-axis) for (A-J) BD, (K-T) FED and (U-Z) the control group. F and G are from the same BD patient, and Q and R are from the same FED patient. The order of the letters corresponds to the order of the patient IDs. For the control, U, W and Y are from the anterior leaflets of the first, second, and third subjects, respectively. V, X and Z, in turn, are from the posterior leaflets of the first, second, and third subjects, respectively.

Figure 6. Histopathological analysis of representative (A-D) control, (E-H) FED and (I-L) BD MVL samples and their respective collagen fiber alignment from SHG image processing. All the samples are from the P2 segment. The histological sections are stained with (A, E, I) hematoxylin-eosin saffron (HES) for general examination, (B, F, J) Alcian Blue for glycosaminoglycans (GAGs), (C, G, K) Elastin stain for elastin fibers and (D, H, L) Masson Trichrome for collagen fibers. A superimposed tissue layer (SiT) is observed in the BD sample, indicated with SiT, see (I-L). Note that the histological sections are sectioned transmurally for a side view, while SHG examines the planar arrangement of collagen fibers in the orthogonal plane with respect to the histological sections. SHG imaging acquisition is performed on the side where the histological sections are dissected to allow comparison between SHG and histological analyses. The scale bar is 200 µm.

Figure 7. Illustration of the mitral valve showing the three scallops of the posterior leaflet (P1-P3) and the corresponding segments of the anterior leaflet (A1-A3). ALC refers to the anterolateral commissure and PMC refers to the posteromedial commissure.

Figure 8.
(A) Acquisition of planar images of collagen fibers using second harmonic generation microscopy (SHG) throughout the thickness from the ventricular side, with a z-step of 5 µm. (B) Corresponding SHG images showing collagen fibers at different depths. The scale bars are 100 µm. (C) Quantitative analysis showing the distribution of collagen fibers, i.e., the collagen content at a specific angle, for each acquisition depth. (D) Applying the circular von Mises distribution to the collagen fiber distribution to parameterize it. (E) Illustration of collagen content with color, and layering of the various corresponding distributions to create a 3D distribution map of collagen fiber content differentiated by angle (x-axis) and depth (y-axis).
Improvement of Gas Barrier Properties for Biodegradable Poly(butylene adipate-co-terephthalate) Nanocomposites with MXene Nanosheets via Biaxial Stretching

In order to ease the white pollution problem, biodegradable packaging materials are highly demanded. In this work, biodegradable poly(butylene adipate-co-terephthalate)/MXene (PBAT/Ti3C2TX) composite casting films were fabricated by melt mixing. Then, the obtained PBAT/Ti3C2TX composite casting films were biaxially stretched at different stretching ratios so as to reduce the water vapor permeability rate (WVPR) and oxygen transmission rate (OTR). It was expected that the combination of Ti3C2TX nanosheets and biaxial stretching could improve the water vapor and oxygen barrier performance of PBAT films. The scanning electron microscope (SEM) observation showed that the Ti3C2TX nanosheets had good compatibility with the PBAT matrix. The presence of Ti3C2TX acted as a nucleating agent to promote the crystallinity when the content was lower than 2 wt%. The mechanical tests showed that the incorporation of 1.0 wt% Ti3C2TX simultaneously improved the tensile stress, elongation at break, and Young's modulus of the PBAT/Ti3C2TX nanocomposite, as compared with those of pure PBAT. The dynamic mechanical tests showed that the presence of Ti3C2TX significantly improved the storage modulus of the PBAT nanocomposite in the glassy state. Compared with pure PBAT, PBAT-1.0 with 1.0 wt% Ti3C2TX exhibited the lowest OTR of 782 cc/m²·day and the lowest WVPR of 10.2 g/m²·day. The enhancement in gas barrier properties can be attributed to the presence of Ti3C2TX nanosheets, which can increase the effective diffusion path length for gases. With biaxial stretching, the OTR and WVPR of PBAT-1.0 were further reduced to 732 cc/m²·day and 6.5 g/m²·day, respectively. The PBAT composite films with enhanced water vapor and oxygen barrier performance exhibit a potential application in green packaging.

Introduction

Green packaging materials have been highly demanded in recent years because of the ever-increasing plastic pollution problem. The traditional packaging materials, such as polyethylene (PE) and poly(vinyl chloride) (PVC), have been gradually replaced by biodegradable polymers such as poly(lactic acid) (PLA), poly(butylene adipate-co-terephthalate) (PBAT), and poly(butylene succinate) (PBS) [1-4]. Compared with other biodegradable polyesters, PBAT has adjustable properties due to the copolymerization of 1,4-butanediol, adipic acid, and terephthalic acid [5]. In addition, it has good ductility, good thermal resistance, and high impact performance, similar to PE. However, it was reported that the oxygen transmission rate (OTR) of PBAT under ambient conditions was around 1050 cc/m²·day, whereas the water vapor permeability rate (WVPR) was 3.3 × 10⁻¹¹ g·m/m²·s·Pa, which makes it difficult to meet the requirements for packaging applications [6-8]. The poor oxygen and water vapor barrier performances limit the broad application of PBAT in packaging. Therefore, it is necessary to improve the oxygen and water vapor barrier performance of PBAT so as to prolong the shelf life and maintain the good quality of food. It is widely accepted that the incorporation of nanofillers is a simple and effective method to reduce the OTR and WVPR of polymer films [9-12], because the fillers present a barrier effect that increases the escape distance of oxygen and water molecules [13,14]. Li et al.
reported that well-aligned graphene nanosheets simultaneously reduced the oxygen permeability and enhanced the aging resistance of a PBAT composite film [15]. The oxygen-containing groups on the graphene nanosheets enhanced the interactions with water molecules and altered the diffusion paths of water molecules. Mondal et al. found that the WVPR of PBAT could be reduced by 25% with the addition of 4 wt% organically modified montmorillonite (OMMT) [16]. This was because the impermeable OMMT in the PBAT matrix increased the tortuosity of the path for water molecules. Li et al. mixed graphene nanosheets with PBAT by the solution casting method [15]. The presence of graphene resulted in an 80% reduction in water permeation and a 99% reduction in oxygen transmission of the PBAT nanocomposite films, which was ascribed to the fact that the graphene nanosheets enlarged the effective diffusion path length of water and oxygen across the films.

MXene is a novel family of two-dimensional (2D) transition metal carbides and/or nitrides [17-21]. The abundance of functional groups on the surface of MXene, such as oxygen (=O), hydroxyl (-OH), or fluorine (-F), endows it with good compatibility with many polar polymer matrices [22]. MXene has attracted considerable research interest for various applications, such as energy storage, sensors, electromagnetic shielding, and so on [23-26]. Ti3C2TX nanosheets have high stiffness and strength, which allows them to serve as effective reinforcing fillers to improve the mechanical properties of polymer/Ti3C2TX nanocomposites. In addition, Wu et al. demonstrated that a small amount of Ti3C2TX significantly improved the complex viscosity and storage modulus of PVDF nanocomposites [22]. Ultrahigh molecular weight polyethylene (UHMWPE) composites containing 0.75 wt% Ti3C2TX had the best creep performance [27]. With the addition of 1.9 vol% of MXene, Ti3C2TX/polystyrene nanocomposites exhibited a 54% higher storage modulus than that of neat polystyrene [28]. Yu et al. demonstrated that the addition of MXene improved the thermal stability and fire safety of polystyrene [29]. However, to the best of our knowledge, PBAT/Ti3C2TX nanocomposites have not been reported on yet. It is expected that impermeable Ti3C2TX nanosheets in the PBAT matrix, combined with biaxial stretching, can not only improve the gas barrier performance, but also enhance the thermal stability and stiffness of the nanocomposites.

Biaxial stretching is the process of stretching hot polymeric films in the machine and cross-machine directions. It is also reported that the biaxial stretching process endows the polymer matrix with an ordered structure and improved gas barrier properties [30-32]. This is because biaxial stretching can help to reduce the free volume of the amorphous region of the polymers, resulting in an enhancement in gas barrier performance [33]. In addition, biaxial stretching can help the 2D filler in the polymer matrix form an oriented structure, which benefits the gas barrier performance [34]. Li et al. reported that oriented OMMT in a PBAT matrix prepared by biaxial stretching significantly reduced the WVPR and caused a 99% reduction in OTR with an enhancement in elongation at break [15]. Yoksan et al.
demonstrated that stretched PLA/PBAT/thermoplastic starch composite films had a stacked-layer planar morphology, which contributed to the improvement in crystallinity, impact strength, and water vapor and oxygen barrier properties [35].

Preparation of PBAT/Ti3C2TX Nanocomposite Biaxial Stretching Films

The preparation diagram of the PBAT/Ti3C2TX nanocomposite biaxial stretching films is shown in Scheme 1. To achieve a better dispersion of Ti3C2TX nanosheets in the PBAT matrix, Ti3C2TX was first mixed with PBAT pellets by melt compounding via a twin-screw extruder to obtain composite casting films. Then the PBAT/Ti3C2TX nanocomposite casting films were biaxially stretched. The effects of Ti3C2TX content on the morphology, thermal stability, and crystallization behavior, as well as the mechanical properties of the PBAT/Ti3C2TX nanocomposites, were comprehensively evaluated. The PBAT nanocomposite films containing the optimized 1 wt% Ti3C2TX were biaxially stretched under different parameters.

Scheme 1. Schematic diagram of the preparation of PBAT/Ti3C2TX nanocomposite biaxial stretching films.

The PBAT/Ti3C2TX nanocomposite biaxial stretching films were prepared in two steps. Prior to the experiment, the PBAT pellets were dried in a vacuum oven at 80 °C for 12 h. Firstly, the PBAT/Ti3C2TX nanocomposite casting film was prepared by extrusion casting (FDHU-35, Potpo, Guangzhou, China) so that the Ti3C2TX could achieve a better dispersion in the PBAT matrix. The temperatures from the hopper to the nozzle were set at 130-150-170-170-170-150 °C. The screw speed was set at 60 rpm. The thickness of the obtained PBAT/Ti3C2TX nanocomposite casting films was approximately 100 µm. The samples were coded as PBAT-X, where X represents the weight ratio of Ti3C2TX in the PBAT/Ti3C2TX nanocomposites. The extruded PBAT nanocomposite casting films containing 1 wt% Ti3C2TX nanosheets were cut into squares (80 mm side length) for biaxial stretching. The biaxially oriented films were then prepared at different stretching ratios on a KARO 5.0 (Brückner Maschinenbau GmbH & Co., Siegsdorf, Germany) equipped with mechanically driven clamps. The stretched films were thermally annealed at a temperature of 110 °C for 60 s. Finally, the biaxially oriented nanocomposite films were obtained to characterize the crystal orientation and the gas barrier properties.

Characterization

The fracture surfaces of the PBAT/Ti3C2TX nanocomposites were characterized by scanning electron microscopy (SEM, Quanta 250, FEI, Hillsboro, OR, USA). The samples were immersed in liquid nitrogen for 30 min and then fractured. They were observed at an accelerating voltage of 5 kV. Prior to the observation, all the samples were coated with a layer of gold.

Thermogravimetric analysis (TGA) of all PBAT/Ti3C2TX nanocomposites was conducted on a TG-209F1 thermal analyzer (Netzsch, Selb, Germany) to measure the thermal stability under an air atmosphere. Samples of about 10 mg were heated from room temperature to 600 °C at a heating rate of 10 °C/min.

The crystallization and melting behaviors of the PBAT/Ti3C2TX nanocomposites were measured on a DSC-204F1 (Netzsch, Selb, Germany). Samples of approximately 5 mg were first heated to 180 °C at a heating rate of 10 °C/min to erase the thermal history, then cooled to 20 °C at a cooling rate of 10 °C/min, followed by a second heating at 10 °C/min to 180 °C.
The peak crystallization temperature (T_cp), the peak melting temperature (T_mp), the crystallization enthalpy (ΔH_c), and the melting enthalpy (ΔH_m) of these samples were summarized. The degree of crystallinity of PBAT (χ_c) was calculated by the following equation:

χ_c = ΔH_m / [ΔH_0 (1 − φ_c)] × 100%, (1)

where ΔH_0 is the melting enthalpy of 100% crystalline PBAT (114 J/g) [36], and φ_c represents the weight ratio of Ti3C2TX in the nanocomposites.

The tensile test was performed on an Instron 5566 universal electronic tensile machine. The specimens were cut into a rectangular shape with dimensions of 1 cm × 8 cm × 100 µm. The tensile speed was fixed at 10 mm/min. The reported results are the average values of at least five successful specimens.

The dynamic mechanical analysis was conducted on a Netzsch DMA 242E (Netzsch, Selb, Germany) analyzer. Tensile measurements were taken on specimens with a dimension of 30 mm at a fixed frequency of 1 Hz, from −90 °C to 70 °C at a ramping rate of 3 °C/min.

The 2D wide-angle X-ray scattering (2D-WAXS) measurements were carried out on an X-ray diffractometer (Rigaku Ultima IV, The Woodlands, TX, USA). The data were collected in the scanning range of 10-60° with a step of 0.02°.

Figure 1 shows the SEM images of the cross-section fracture surfaces of the PBAT/Ti3C2TX nanocomposite films. In Figure 1a,b, PBAT-0 exhibits a relatively smooth surface due to its brittle fracture after immersion in liquid nitrogen [37]. With the addition of Ti3C2TX nanosheets, the surfaces of the PBAT/Ti3C2TX nanocomposites become gradually rougher (Figure 1c-f) when the Ti3C2TX content is lower than 2.0 wt%. This is due to the presence of Ti3C2TX, which served as a rigid filler to transfer the stress during fracture. In addition, it can be seen that an agglomeration phenomenon existed on the surface of PBAT-2.0 (Figure 1g,h). In Figure 1, no pores or holes are observable between the exposed Ti3C2TX nanosheets on the surfaces (Figure 1f,h) and the PBAT matrix, indicating that the Ti3C2TX nanosheets have good compatibility with the PBAT matrix. This is attributed to the large number of polar groups of Ti3C2TX that can interact with the polyester groups of PBAT.

Thermal Stability

The thermal stability of the PBAT/Ti3C2TX nanocomposites is shown in Figure 2, and the corresponding data, including the temperatures at 10% weight loss (T_10), the temperatures at the maximum weight loss rate (T_max), and the char yields at 600 °C, are listed in Table 1. In Figure 2a, two thermal decomposition stages are observable for all the samples. The first stage, between 300 °C and 420 °C, can be attributed to random main-chain scission and thermo-oxidative reactions of PBAT [38]. The second stage, in the range of 420-550 °C, corresponds to cis-elimination and thermo-oxidative reactions [32]. In Table 1, it can be seen that the values of T_10 show an increasing trend with the increase of Ti3C2TX content. When compared to PBAT-0, the T_10 of PBAT-2.0 increased from 373.5 °C to 379.2 °C. In addition, the T_max of PBAT-2.0 gradually increased to 412.5 °C with the addition of 2 wt% Ti3C2TX. That is because the presence of Ti3C2TX rapidly catalyzed the formation of a char layer that served as a thermal barrier to protect the underlying polymer matrix [39]. The improved char yield benefited from the isolation of volatile gases and oxygen, thereby improving the thermal stability of PBAT. Furthermore, the char yield at 600 °C for PBAT-0, PBAT-0.5, PBAT-1.0, and PBAT-2.0 was 0.7%, 1.0%, 1.4%, and 1.7%, respectively. This was mainly due to the introduction of Ti3C2TX, which promoted charring, so that part of the polymer could not be completely thermally decomposed, resulting in enhanced char residues [40].
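As a quick arithmetic check of the crystallinity equation above (before turning to the DSC results in the next section), the sketch below back-calculates an illustrative melting enthalpy. The ΔH_m value here is hypothetical, chosen only to reproduce the 13.4% reported below for PBAT-1.0; it is not a measured number.

```python
def crystallinity(dH_m: float, phi_f: float, dH_0: float = 114.0) -> float:
    """Degree of crystallinity (%) from DSC per Eq. (1):
    chi_c = dH_m / (dH_0 * (1 - phi_f)) * 100, where dH_0 is the melting
    enthalpy of 100% crystalline PBAT and phi_f the filler weight ratio."""
    return dH_m / (dH_0 * (1.0 - phi_f)) * 100.0

# Hypothetical example: dH_m ~ 15.1 J/g at 1 wt% Ti3C2TX filler
print(round(crystallinity(15.1, 0.01), 1))  # -> 13.4
```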
Crystallization and Melting Behavior

The DSC curves of the PBAT/Ti3C2TX nanocomposites are shown in Figure 3. In Figure 3a, the onset crystallization temperatures of the PBAT nanocomposites exhibit an increasing trend with the increase of Ti3C2TX content. In addition, the values of T_cp for the PBAT nanocomposites in Table 2 are 72.1, 73.1, 73.7, and 75.2 °C, respectively. The increase in T_cp indicates that the presence of Ti3C2TX had a heterogeneous nucleation effect, accelerating the formation of crystallites in the PBAT matrix during cooling [41]. It was noted that the values of ΔH_m were lower than ΔH_c for the PBAT/Ti3C2TX composites, which can be ascribed to the fast cooling rate. In Figure 3b, the values of T_mp for the PBAT nanocomposites are 119.3, 121.0, 121.6, and 121.0 °C, respectively. The increase in T_mp suggests that the Ti3C2TX filler contributes to the formation of more perfect PBAT crystallites during the cooling procedure. Moreover, the degree of crystallinity of PBAT-1.0 had the highest value of 13.4% with the addition of 1.0 wt% Ti3C2TX. When the content of Ti3C2TX was further increased up to 2.0 wt%, the degree of crystallinity of PBAT-2.0 showed a slight decrease. This may be due to the excessive addition of Ti3C2TX, which led to agglomeration to some extent. On the other hand, the inhibition effects of the excessive Ti3C2TX nanosheets were more profound than the nucleating effect, which led to smaller crystallites and decreased crystallinity [42].

Figure 4 shows the typical stress-strain curves for pure PBAT and the PBAT/Ti3C2TX nanocomposite casting films, and the corresponding data are summarized in Table 3. It was observed that PBAT-0 exhibited high ductility (elongation at break ~1442%) but low tensile strength (~22.6 MPa), which is consistent with a previous report [14]. With the addition of 0.5 wt% Ti3C2TX, the tensile strength of the PBAT/Ti3C2TX nanocomposite increased by 19.8%, with a slight increase in the elongation at break. As depicted in Figure 1, the Ti3C2TX had good interfacial interaction with the PBAT matrix; therefore, the added Ti3C2TX nanosheets can transfer the stress during the tensile process. When the Ti3C2TX content was increased to 1.0 wt%, the PBAT/Ti3C2TX nanocomposite had the maximum tensile strength (31.6 MPa). This enhancement can be ascribed to the reinforcement effects of the nanofillers and the interaction between the stress concentration zones around the Ti3C2TX nanosheets [43,44]. It is worth noting that PBAT-2.0 showed a decreasing tendency in both tensile stress and elongation at break as compared with PBAT-1.0. This may be due to the aggregation of Ti3C2TX nanosheets in the PBAT matrix.

Dynamic Mechanical Analysis

The storage modulus as a function of temperature for the PBAT/Ti3C2TX nanocomposite casting films is shown in Figure 5a. It is clear that the storage modulus of the PBAT nanocomposites reinforced with Ti3C2TX was higher than that of PBAT-0 in the glassy state. In addition, the reinforcement effect was more obvious with the increase of Ti3C2TX content.
When compared to PBAT-0, the storage modulus of PBAT-2.0 at −80 °C increased from 1220 MPa to 2342 MPa. This is due to the stiffening effect of the rigid Ti3C2TX nanosheets. Aside from this, the polar groups on the surface of Ti3C2TX may have had intermolecular interactions with the PBAT matrix, which may have also improved the storage modulus of the PBAT/Ti3C2TX nanocomposites [43]. The peak of the loss factor (tan δ) is usually defined as the glass transition temperature (T_g). It is observable from Figure 5b that the T_g shifted to a lower temperature when the PBAT matrix was incorporated with Ti3C2TX. With the addition of 2 wt% Ti3C2TX, the T_g of PBAT-2.0 shifted from −11.9 °C to −15.0 °C, as compared with that of PBAT-0. This can be attributed to the incorporation of Ti3C2TX, which can improve the chain mobility of the amorphous regions of PBAT due to the lubrication effect of the Ti3C2TX nanosheets. In addition, the height of tan δ also showed a slight increase, indicating that an increase in Ti3C2TX content results in higher dissipative energy [45].

The 2D-WAXS patterns of the PBAT/Ti3C2TX films show diffraction signals from the (111), (100), (110), and (010) crystal planes in Figure 6a, and these crystal planes belong to the PBAT phase [10]. With the increase of the stretching ratio in the machine direction (MD), the crystal planes (111), (100), (110), and (010) in the PBAT composite films (Figure 6b,c) show more obvious orientation. In addition, the larger the stretching ratio, the more obvious the orientation effect, which indicates that uniaxial stretching can promote the orientation of the crystal form of the PBAT/Ti3C2TX stretched films along the MD. In Figure 6d,e, there is no obvious crystal orientation in the 2D-WAXS diffraction patterns of the biaxially stretched PBAT/Ti3C2TX films, indicating that biaxial stretching will not cause the film to have an obvious crystal orientation in a certain direction. The crystal orientation of the PBAT/Ti3C2TX composite films further confirms that the biaxially oriented PBAT/Ti3C2TX film has excellent isotropy.

Gas Barrier Properties of Biaxial Stretching Films

The gas barrier properties of the PBAT/Ti3C2TX nanocomposite casting films are shown in Figure 7. In Figure 7a, the OTR of the PBAT nanocomposite casting films shows a decreasing trend with the increase of Ti3C2TX content. The lowest OTR was achieved for PBAT-1.0, which decreased from 1030 to 782 cc/m²·day. Similarly, the water vapor transmission rate (WVTR) of the PBAT/Ti3C2TX nanocomposite casting films decreased as the Ti3C2TX content increased in the PBAT matrix. In Figure 7b, the WVTR for PBAT-0, PBAT-0.5, PBAT-1.0, and PBAT-2.0 is determined to be 14.3, 12.7, 10.2, and 11.7 g/m²·day, respectively. It is speculated that the added Ti3C2TX nanosheets can serve as a barrier that forms a tortuous path, increasing the effective diffusion path length. Furthermore, the abundant hydroxyl groups on the surface of Ti3C2TX contribute to interactions with water molecules, delaying the diffusion to some extent. However, the aggregation of Ti3C2TX results in a deterioration of the gas barrier performance when the content of Ti3C2TX is increased up to 2.0 wt%.
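The tortuous-path argument can be made semi-quantitative with the classical Nielsen model for impermeable platelet fillers. The sketch below is purely illustrative and not part of the study: the volume fraction and aspect ratio are assumed values, chosen only to show that a sub-1 vol% loading of high-aspect-ratio nanosheets can plausibly account for a drop in permeability of the magnitude measured here (1030 to 782 cc/m²·day is roughly a 24% reduction).

```python
def nielsen_relative_permeability(phi: float, alpha: float) -> float:
    """Nielsen tortuous-path model: P_composite / P_matrix for impermeable
    platelets of aspect ratio alpha at volume fraction phi, oriented
    perpendicular to the diffusion direction."""
    return (1.0 - phi) / (1.0 + 0.5 * alpha * phi)

# Assumed numbers: 1 wt% Ti3C2TX is taken as ~0.25 vol% (hypothetical
# densities) with an assumed aspect ratio of 200 for dispersed nanosheets.
print(nielsen_relative_permeability(phi=0.0025, alpha=200))  # ~0.80
```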
To investigate the effects of the stretching ratio on the gas barrier performance of the PBAT/Ti3C2TX nanocomposite stretched films, the OTR and WVTR data of the PBAT-1.0 stretched film under different stretching ratios are shown in Figure 8. In Figure 8a, it is clear that the OTR of the PBAT-1.0 stretched film decreased from 782 to 732 cc/m²·day as the stretching ratio increased to 3 under uniaxial stretching. This can be attributed to the enhanced orientation of the PBAT crystallites formed during the uniaxial stretching process, which is demonstrated in the 2D-WAXS patterns (Figure 6). Meanwhile, the WVTR of the PBAT-1.0 stretched film achieved the lowest value of 6.5 g/m²·day under the 2 × 2 biaxial stretching condition, as shown in Figure 8b. This is because the biaxial stretching process can contribute to the formation of an amorphous phase of PBAT and the exfoliation of the Ti3C2TX sheets. However, the barrier effect of the Ti3C2TX sheets is more profound than the effect of the PBAT amorphous phase, resulting in a further decrease in OTR and WVTR. The combination of two-dimensional inorganic nanofillers with the biaxial stretching process paves the way for the preparation of biodegradable polymers with enhanced gas barrier performance.

Conclusions

In this work, two-dimensional MXene (Ti3C2TX) nanosheets were mixed with PBAT by melt compounding. The effects of Ti3C2TX content on the morphology, thermal stability, crystallization behavior, and gas barrier performance of PBAT were investigated. Furthermore, the effects of the biaxial stretching ratio on the gas barrier properties were further discussed. The TGA results showed that the addition of Ti3C2TX improved the thermal stability of the PBAT nanocomposites. In addition, the tensile tests showed that the addition of 1.0 wt% Ti3C2TX improved the maximum tensile stress without losing ductility. The storage modulus of PBAT in the glassy state was significantly improved with the addition of Ti3C2TX. After stretching, the PBAT-1.0 film (1 × 3) exhibited an oxygen transmission rate of 732 cc/m²·day, which was 28.9% lower than that of the pure PBAT casting film. When the stretching ratio was 2 × 2, the WVTR of the PBAT-1.0 biaxially stretched film was 6.5 g/m²·day, which was 36.3% lower than that of the 1 × 1 PBAT-1.0 film. The enhancement in gas barrier properties can be attributed to the presence of Ti3C2TX nanosheets, which can increase the effective diffusion path length for gases. The results of this work indicate the need for further studies on the influence of the orientation and surface functionalization of Ti3C2TX nanosheets, as well as the incorporation of compatibilizers, in PBAT composite films for packaging applications.

Author Contributions: Investigation, X.W., X.L., L.C. and S.F.; writing-original draft preparation, X.W. and X.L.; methodology, X.W. and X.L.; writing-review and editing, Y.L. and S.F.; supervision, Y.L. and S.F. All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded by the Natural Science Foundation of Hunan Province (no. 2019JJ50132) and the Innovation Platform Open Fund of Hunan Province (no. 18K079).

Institutional Review Board Statement: This study did not involve humans or animals.

Informed Consent Statement: This study did not involve patients.

Data Availability Statement: The raw/processed data required to reproduce these findings cannot be shared at this time as the data also form part of an ongoing study.

Conflicts of Interest: The authors declare no conflict of interest.
Improvement of Time Forecasting Models Using Machine Learning for Future Pandemic Applications Based on COVID-19 Data 2020-2022

Improving forecasts, particularly the accuracy, efficiency, and precision of time-series forecasts, is becoming critical for authorities seeking to predict, monitor, and prevent the spread of the coronavirus disease. However, the results obtained from predictive models are often imprecise and inefficient because the datasets contain both linear and non-linear patterns. Linear models such as the autoregressive integrated moving average cannot effectively predict complex time series, whereas nonlinear approaches are better suited for that purpose. Therefore, to achieve a predictive value of COVID-19 that is more accurate, more efficient, and closer to the true value, a hybrid approach was implemented. The objectives of this study are twofold. The first objective is to propose an intelligence-based prediction method, the autoregressive integrated moving average-least-squares support vector machine (ARIMA-LSSVM), to achieve better prediction results. The second objective is to investigate the performance of the proposed model by comparing it with the autoregressive integrated moving average (ARIMA), support vector machine (SVM), least-squares support vector machine (LSSVM), and autoregressive integrated moving average-support vector machine (ARIMA-SVM) models. Our investigation is based on three real COVID-19 datasets: daily new cases data, daily new death cases data, and daily new recovered cases data. Statistical measures, namely the mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE), were then used to verify that the proposed models are better than the ARIMA, SVM, LSSVM, and ARIMA-SVM models. Empirical results using three recent datasets of known COVID-19 cases in Malaysia show that the proposed model generates the smallest MSE, RMSE, MAE, and MAPE values for both training and testing datasets compared to the ARIMA, SVM, LSSVM, and ARIMA-SVM models. This means that the predicted values of the proposed model are closer to the true values, demonstrating that the proposed model can generate estimates more accurately and efficiently. Compared to the ARIMA, SVM, LSSVM, and ARIMA-SVM models, our proposed models also perform much better in terms of percent error reduction for both training and testing on all datasets. Therefore, the proposed model is possibly the most efficient and effective way to improve prediction for future pandemics, with a higher level of accuracy and efficiency.

Early time-series studies of the outbreak commonly relied on the ARIMA (p, d, q) model [5-7]. Predicting new daily cases of COVID-19 was a difficult task, as cases increased daily. In the first wave, the number of COVID-19 cases increased continuously for a period and then decreased. In the second wave, however, it appeared to pick up again, and some of the COVID-19 cases were difficult to predict. In this scenario, some researchers predicted the pattern of COVID-19 using ARIMA [8-14].
However, the ARIMA model has the limitation that it can typically only handle a linear time-series data structure [15]. ARIMA approximations are insufficient for complex time-series prediction problems, which poses an obstacle for researchers, especially when the patterns are nonlinear [16]. Despite their otherwise superior performance, the classification performance of support vector machines (SVMs) and the generalizability of the classifier are often affected by the dimension or number of feature variables used, as mentioned by Lee [17]. The development of vector machine models nevertheless offers a route to more accurate and efficient results in such prediction problems. SVMs, first introduced in 1995 by Vladimir Vapnik [18] in the field of statistical learning theory and structural risk minimization, have proven useful in a variety of prediction and classification problems. SVMs can also manage or address difficulties such as non-linearity, local minima, and high dimensionality, where the ARIMA model cannot [15,19-21]. SVM models have recently been used to handle such nonlinear, local-minimum, and high-dimensional problems. SVMs can even achieve higher accuracy for long-term predictions compared to other computational approaches in many practical applications. However, a single SVM model, like a single ARIMA model, also has limitations, as the SVM model can only handle non-linear data and not linear data. Given the limitations of single ARIMA and SVM models, and following an in-depth analysis of time-series prediction, hybrid approaches have become the best way to overcome both limitations; they have had a very significant impact in many areas due to their dynamic nature and higher level of prediction accuracy, efficiency, and precision. This approach is crucial because of the problems encountered in time-series forecasting, where almost all real time series contain both linear and nonlinear correlation patterns in the data. Recently, the hybridization of prediction methods has been used with great success to achieve higher prediction accuracy [15,16,19,20,22-26]. Regarding the spread of COVID-19, the hybrid time-series model approach is crucial for predicting the impact of the COVID-19 outbreak, and has proven successful in predicting COVID-19 [27-33].

This study aims to (a) propose the ARIMA-LSSVM hybrid model approach, which achieves better forecast results by producing estimators with smaller error terms, and (b) examine the performance of the proposed models by comparing them to ARIMA and SVM models using three daily COVID-19 datasets for Malaysia, that is, daily new positive cases, daily new deaths, and daily new recovered cases. Despite recent advances in time-series modelling and in COVID-19 research, existing modelling work does not specifically cover COVID-19 cases in Malaysia in a way that helps the authorities manage the spread of this outbreak by producing more efficient and more accurate forecasting results. This study makes a significant contribution to the field of pandemic prediction and prevention by introducing novel approaches to dealing with COVID-19 data. Rather than relying on traditional methods, this research utilizes evidence-based prediction techniques, which have been shown to be more accurate and efficient. The use of these intelligent forecasting models enables local health authorities to create more precise and effective preventive measures, especially in the face of future outbreaks.
This study is particularly innovative in its use of hybrid forecasting models built by machine learning for Malaysia's future pandemics, such as avian flu or novel coronavirus strains. According to Moore [34], a plausible scenario for the next new pandemic is the avian influenza virus strain H7N9 or a novel coronavirus. The predictive models developed here are more precise, accurate, and efficient in anticipating the dynamic spread of the virus. This approach has been tested on real-world data, including daily new cases, daily new death cases, and daily new recovered cases of COVID-19, making it a valuable tool for public health officials and researchers. This research also has significant implications for future outbreaks, particularly in countries with tropical rainforests such as Malaysia. By predicting the spread of COVID-19 early on, this model can help policymakers build better healthcare facilities, take legislative action, and avoid economic losses. While a vaccine is now available, this model remains useful in accurately forecasting and preventing the impact of future pandemics, including those caused by new virus strains. This study's innovative and evidence-based methods make a valuable contribution to pandemic prediction and prevention, providing significant insights that can be used to mitigate the impact of future outbreaks. The implications of this research extend to public health authorities, policymakers, and researchers worldwide, offering powerful tools for mitigating the devastating effects of pandemics.

The remainder of this paper is structured as follows. Materials and Methods goes into detail about the method we used to develop our proposed model. The hybrid ARIMA-SVM model used in this study is then briefly described. The results and discussion present the performance of our proposed model based on three known COVID-19 case datasets. Finally, we wrap up the article and make suggestions for future research.

ARIMA Modelling

The autoregressive integrated moving average model, ARIMA (p, d, q), is one of the most widely used families of time-series forecasting models due to its flexibility across different types of time series [16]. It also explicitly accounts for several standard patterns in time-series analysis, providing a powerful and easy-to-use way to produce accurate time-series forecasts. However, limitations may occur because of the assumption of a linear form, representing a linear relationship between the future value of the time series and the current value, the past values, and random noise in the model [15-17,21,26]. In the ARIMA model, p and q are the numbers of autoregressive and moving average terms, which give the order of the model, while d is an integer representing the order of differencing. The ARIMA model with mean µ is represented mathematically as

y_t = µ + φ_1 y_{t−1} + φ_2 y_{t−2} + … + φ_p y_{t−p} + e_t − θ_1 e_{t−1} − θ_2 e_{t−2} − … − θ_q e_{t−q}, (1)

where y_t and e_t are the actual value and the random error at time t, respectively. The errors e_t are assumed to be independently and identically distributed (iid) with a mean of 0 and a constant variance of σ²; θ_i (i = 1, 2, …, q) and φ_j (j = 1, 2, …, p) are the model parameters that need to be estimated.
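As an illustration of fitting such a model, the snippet below uses Python's statsmodels (the study itself uses R); the simulated series is only a stand-in for the Malaysian daily-case data, and the order (2, 1, 2) is the one the study later selects for daily new positive cases.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Stand-in series matching the study's span (765 daily points).
rng = np.random.default_rng(42)
idx = pd.date_range("2020-10-01", periods=765, freq="D")
y = pd.Series(np.cumsum(rng.normal(0.0, 50.0, size=765)) + 5000.0, index=idx)

fit = ARIMA(y, order=(2, 1, 2)).fit()  # (p, d, q) chosen via AIC/BIC and ACF checks
print(fit.aic, fit.bic)
residuals = fit.resid                  # passed to the nonlinear (LSSVM) stage
forecast = fit.forecast(steps=7)       # one-week-ahead linear forecast
```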
Support Vector Machines Modelling

The support vector machine (SVM), introduced by Vladimir Vapnik [18] and incorporating statistical learning theory, can handle higher-dimensional data well, with good generalizability even from a small number of instances. Because the model selects boundary support vectors from the input data, it processes the data quickly. The SVM regression function is written as follows. For a linear regression dataset {x_i, y_i}, the function is formulated as

f(x) = w · x + b. (2)

The coefficients w and b are estimated by minimizing

R(C) = (1/2)‖w‖² + C (1/n) Σ_{i=1}^{n} L_ε(y_i, f(x_i)), (3)

where L_ε is called the ε-insensitive loss function and is formulated as

L_ε(y, f(x)) = 0 if |y − f(x)| ≤ ε, and |y − f(x)| − ε otherwise. (4)

Equation (3) can be transformed into the following constrained formulation by introducing the positive slack variables ξ_i and ξ*_i:

minimize (1/2)‖w‖² + C Σ_{i=1}^{n} (ξ_i + ξ*_i), subject to y_i − w · x_i − b ≤ ε + ξ_i, w · x_i + b − y_i ≤ ε + ξ*_i, and ξ_i, ξ*_i ≥ 0. (5)

We always use dual theory to convert the above formula into a convex quadratic programming problem when solving it. Introducing Lagrange multipliers α_i and α*_i for the constraints in Equation (5) results in the dual problem

maximize −(1/2) Σ_i Σ_j (α_i − α*_i)(α_j − α*_j)(x_i · x_j) + Σ_i y_i (α_i − α*_i) − ε Σ_i (α_i + α*_i), (6)

subject to

Σ_i (α_i − α*_i) = 0, 0 ≤ α_i, α*_i ≤ C. (7)

When a dataset cannot be regressed linearly, we map it to a high-dimensional feature space and regress it linearly there. The resulting regression function is

f(x) = Σ_i (α_i − α*_i) K(x_i, x) + b, (8)

subject to the same constraints as in Equation (7), where K(x_i, x_j) = φ(x_i) · φ(x_j) is the inner product in the feature space and is called the kernel function. Any symmetric function that satisfies the Mercer condition can be used as a kernel function [19]. The Gaussian kernel function is specified in this study. SVMs were used to estimate the nonlinear behaviour of the forecasting dataset because Gaussian kernels perform well under general smoothness assumptions [22].

Least-Squares Support Vector Machines Modelling

The least-squares support vector machine (LSSVM), proposed by Suykens and Vandewalle [35], is a modification of the standard SVM. LSSVM formulates the training process as the solution of a linear problem, which is quicker than the quadratic programming of SVM. Additionally, this model is more time-efficient when analysing huge datasets. Consider a given training set (x_i, y_i), i = 1, 2, …, n, with x_i ∈ R^n as input data and y_i ∈ R as output data. LSSVM defines the regression function as y(x) = ω^T φ(x) + b and formulates training as the constrained optimization

minimize J(ω, e) = (1/2) ω^T ω + (γ/2) Σ_{i=1}^{n} e_i², subject to y_i = ω^T φ(x_i) + b + e_i, i = 1, …, n, (9)

where ω is the weight vector; γ is the regularization parameter, which determines the trade-off between training error minimization and smoothness of the estimated function; e_i is the approximation error; φ(·) is the nonlinear mapping; and b is the bias term. The constrained optimization of Equation (9) can be translated into an unconstrained optimization by constructing the Lagrange function

L(ω, b, e; α) = J(ω, e) − Σ_{i=1}^{n} α_i [ω^T φ(x_i) + b + e_i − y_i]. (10)

The solution is obtained from the Karush-Kuhn-Tucker (KKT) conditions, found by partially differentiating with respect to ω, b, e_i, and α_i:

∂L/∂ω = 0 → ω = Σ_i α_i φ(x_i); ∂L/∂b = 0 → Σ_i α_i = 0; ∂L/∂e_i = 0 → α_i = γ e_i; ∂L/∂α_i = 0 → ω^T φ(x_i) + b + e_i − y_i = 0. (11)

Eliminating ω and e_i yields a set of linear equations whose solution gives the LSSVM model

y(x) = Σ_{i=1}^{n} α_i K(x, x_i) + b, (12)

where K(x, x_i) = exp(−‖x − x_i‖²/σ²) is the radial basis function (RBF) kernel, and the coefficients α and b are obtained by calculating linear operations.
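A minimal NumPy sketch of the LSSVM training just described is given below. It solves the KKT linear system directly and is illustrative only: the RBF convention exp(−‖x − x_i‖²/σ²) and the demo data are assumptions, not the authors' R implementation.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """K[i, j] = exp(-||A_i - B_j||^2 / sigma^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2)

def lssvm_fit(X, y, gamma, sigma):
    """Solve the LSSVM KKT system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.r_[0.0, y])
    return sol[0], sol[1:]                    # bias b, coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(60, 1))
    y = np.sin(3.0 * X[:, 0]) + 0.05 * rng.standard_normal(60)
    b, alpha = lssvm_fit(X, y, gamma=100.0, sigma=0.3)
    print(lssvm_predict(X, b, alpha, np.array([[0.5]]), sigma=0.3))  # ~ sin(1.5)
```

Because training reduces to one linear solve, this directly reflects the speed advantage over SVM's quadratic programming noted above.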
Proposed Hybrid Model

Despite the various time-series models available, the accuracy, efficiency, and precision of time-series forecasts are becoming crucial for many of today's decision-making processes, and these qualities are not guaranteed by the ARIMA and SVM models alone. This is also the main reason why time-series forecasting remains a crucial, demanding, and dynamic topic that is actively researched in many fields of study. ARIMA and SVM models have each prevailed in their linear or nonlinear domains [15,25,26]. However, neither offers a generic principle that can be generalized to all situations. Therefore, a hybrid approach that exploits both linear and non-linear modelling capabilities is recommended, mainly to improve the overall prediction effectiveness. To date, there has been no research on improving the effectiveness of predictive models created for Malaysia, particularly for the case of COVID-19. There are two reasons for using hybrid models in this study. The first is that a single ARIMA or SVM model may not be sufficient to identify all the characteristics of the time series. The second is that one model alone, or both, cannot recognise the actual data-generating process.

This study's hybrid models were built in two stages: Part I addresses the linear autocorrelation composition, and Part II addresses the nonlinear components. Thus,

y_t = L_t + N_t, (13)

where L_t and N_t denote the linear composition and the nonlinear component, respectively. Both parts must be estimated from the data. Part I focuses on linear modelling, which employs the ARIMA model to model the linear composition. The residuals from this first model contain the nonlinear interactions that cannot be modelled by a linear model, and possibly also a remaining linear relationship. Let ε_t denote the residual from the linear model at time t. Then

ε_t = y_t − L̂_t, (14)

where L̂_t is the predicted value for time t from the estimated relationship in (1). The residual dataset after ARIMA fitting should contain only the relationships, predominantly non-linear, that cannot be represented by a linear model [15]. The first-stage results, which include the forecast values and the residuals from linear modelling, are then used in Part II. Part II focuses on nonlinear modelling, where LSSVM is used to model the nonlinear (possibly also linear) relationship that occurs in the residuals of the linear modelling as well as in the original data. The residual can then be modelled using LSSVM in various configurations as follows:

ε_t = f(ε_{t−1}, ε_{t−2}, …, ε_{t−n}) + e_t, (15)

where f is a nonlinear function determined by the LSSVM model and e_t is the random error. Thus, the hybrid forecast is

ŷ_t = L̂_t + N̂_t, (16)

where N̂_t is the estimate of N_t obtained from Equation (15); the forecast values are therefore achieved by the summation of the linear and nonlinear components. Figure 1 shows the functional flowchart of the hybrid models. In short, the proposed hybrid methodology is divided into two parts. The ARIMA model is used to analyse the linear composition problem in Part I. Part II develops an LSSVM model to model the residuals from Part I. Because the ARIMA model in Part I cannot handle the nonlinear component of the data, the residuals of the linear model will include information about the nonlinearity. The LSSVM results can thus be utilised as forecasts of the ARIMA model's error terms. The hybrid model captures different patterns by combining the distinct features and strengths of the ARIMA and LSSVM models. As a result, it is more effective to model the linear and non-linear patterns separately with two different models and then recombine the forecast results to improve the overall modelling and forecasting performance.
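The two-stage combination in Equations (13)-(16) reduces to a few lines of code. The sketch below is illustrative, reusing a fitted statsmodels ARIMA for the linear part and any trained residual regressor (for example, the LSSVM sketch above wrapped in a predict call) for the nonlinear part; the lag count is an assumed parameter.

```python
import numpy as np

def hybrid_forecast(arima_fit, residual_predict, lags=7):
    """One-step hybrid forecast y_hat = L_hat + N_hat (Eq. 16).
    arima_fit: fitted statsmodels ARIMA (Part I).
    residual_predict: callable mapping a (1, lags) array of the most recent
    ARIMA residuals to N_hat (Part II), assumed already trained on Eq. (15)."""
    L_hat = float(arima_fit.forecast(steps=1).iloc[0])      # linear component
    eps = np.asarray(arima_fit.resid)[-lags:]               # recent residuals
    N_hat = float(np.ravel(residual_predict(eps.reshape(1, -1)))[0])
    return L_hat + N_hat
```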
Proposed Algorithm

Step 1: Three selected time series of COVID-19 cases (1 October 2020 to 4 November 2022), namely daily new positive cases, daily new death cases, and daily new recovered cases, are generated in the R programming language.

Step 2: The generated datasets are defined as $\{x_{11}, \ldots, x_{1n}\}$, $\{x_{21}, \ldots, x_{2n}\}$, and $\{x_{31}, \ldots, x_{3n}\}$ for daily new positive cases, daily new death cases, and daily new recovered cases, respectively. The best ARIMA(p, d, q) is then selected after checking the autocorrelation function (ACF) plot of the ARIMA(p, d, q) residuals. The best-fitting model is ARIMA(2, 1, 2) for daily new positive cases, ARIMA(1, 1, 2) for daily new death cases, and ARIMA(0, 1, 1) for daily new recovered cases of COVID-19.

Step 3: The fitted values $\hat{L}_t$ are obtained from the selected ARIMA model.

Step 4: The values in Step 3 are combined as a set of input variables to obtain the output $y_t$.

Step 5: The ARIMA(p, d, q) is defined by the order of q. Using the information in Step 4, Vector Machines modelling is carried out in the R programming language to examine the residuals.

Step 6: A fitted value of the ARIMA model hybridized with the Vector Machines model is obtained for all sample data. The residuals $\varepsilon_t$ are then generated to obtain the forecast component $\hat{N}_t$.

Step 7: The data are split into training data and testing data for further Vector Machines modelling. The Vector Machines procedure is run using the "e1071" and "liquidSVM" packages in the R programming language.

Step 8: The two modifiable parameters of the LSSVM technique ($\gamma$ and $\sigma$) are derived by minimizing an objective function such as the mean squared error (MSE). The grid-search method updates the parameters exponentially within the specified range using predetermined equidistant steps.

Step 9: Using the split data as the processing data and the order q as in Step 5, the combined forecast is computed as in Equation (13): $\hat{y}_t = \hat{L}_t + \hat{N}_t$.

Step 10: Model performance is estimated using the statistical measurements MSE, RMSE, MAE, and MAPE.

Forecasting Evaluation Criteria

To assess the overall performance of the proposed hybrid models, the standard statistical measurements followed by [15,16,36] are used, including MAE (Mean Absolute Error), MAPE (Mean Absolute Percentage Error), MSE (Mean Squared Error), and RMSE (Root Mean Squared Error). In time-series analysis, measurement tools such as Akaike's information criterion (AIC) and the Bayesian information criterion (BIC) are commonly used to determine the appropriate distributed-lag length for the ARIMA model. Model selection is therefore based on the model with the lowest AIC and BIC values, resulting in the selection of the best ARIMA model. Meanwhile, for the LSSVM models, two tunable parameters, $\gamma$ and $\sigma$, are used to determine the best-fitted model. Incorrect selection of the LSSVM model parameters can lead to over- or underfitting of the training data. As with the ARIMA model, the parameter set of the LSSVM model with the lowest MSE value is selected for the best-fitting model. In the hybrid models, the ARIMA therefore first functions as a pre-processor, filtering the linear pattern of the datasets; the ARIMA model's error term is then fed into the SVM, and the LSSVM is used to reduce the ARIMA error function.
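The Step 8 grid search can be carried out with e1071::tune.svm, minimizing cross-validated MSE. The exponential grids and the toy lagged matrix below are illustrative assumptions (the paper's actual ranges are not reported here); e1071's cost parameter plays the role of the regularization constant.

```r
# Minimal sketch of the Step 8 grid search over kernel and regularization grids.
library(e1071)

set.seed(1)
r <- as.numeric(arima.sim(model = list(ar = 0.5), n = 200))  # toy residual series
E <- embed(r, 4)                       # columns: r_t, r_{t-1}, r_{t-2}, r_{t-3}

tuned <- tune.svm(x = E[, -1], y = E[, 1],
                  type = "eps-regression", kernel = "radial",
                  gamma = 2^(-8:2),    # kernel-width grid (exponential steps)
                  cost  = 2^(0:10),    # regularization grid
                  tunecontrol = tune.control(sampling = "cross", cross = 5))

tuned$best.parameters                  # gamma/cost pair with the lowest CV MSE
tuned$best.performance                 # the corresponding mean squared error
```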
Application of the Hybrid Model of COVID-19 in Malaysia

This section examines the proposed model's performance in two ways: first, against the ARIMA, SVM, LSSVM, ARIMA-SVM, and ARIMA-LSSVM models; second, in terms of the percentage improvement of the proposed models over the ARIMA and SVM models. Since the World Health Organization (WHO) declared COVID-19 a worldwide pandemic, COVID-19 time-series datasets have been extensively studied. The predictive capability of the developed models was compared using three well-known datasets of daily COVID-19 cases in Malaysia (daily new positive cases, daily new death cases, and daily new recovered cases) to demonstrate the performance of the proposed model in terms of accuracy, effectiveness, and precision. All these data are reported from 1 October 2020 to 4 November 2022 and were retrieved from the COVIDNOW website at https://covidnow.moh.gov.my/, accessed on 10 January 2023.

The minimum values of daily new death, new positive, and new recovered cases in Table 1 are 0, 2600, and 1.8, respectively, while the maximum values of new positive, death, and recovered cases are 33,872.0, 592, and 33,406, respectively. Similarly, the means for new positive, death, and recovered cases are 6322.7, 47.51, and 6415.5, respectively, with the medians shown in parentheses (3471, 11, 3447.0). The first-quartile values for daily new positive, death, and recovered cases are 1922, 4, and 1843, respectively, and the third-quartile values are 6824, 58, and 6775, respectively. Furthermore, the standard deviations for new positive, death, and recovered cases are 7097.8, 81.12, and 7058.3, respectively.

This section also discusses the process of the proposed models for both parts, i.e., Part I (linear modelling) and Part II (nonlinear modelling), using the three COVID-19 datasets, namely daily new positive cases, daily new death cases, and daily new recovered cases, to demonstrate the effectiveness of the proposed models. Both the linear and nonlinear modelling, as well as the data handling in this study, are carried out in R.

Part I (Linear Modelling): ARIMA modelling yields ARIMA(2, 1, 2) as the best model for the daily new positive cases dataset, ARIMA(1, 1, 2) for the daily new death cases dataset, and ARIMA(0, 1, 1) for the daily new recovered cases dataset. Table 2 summarizes the results of these ARIMA(p, d, q) models, and Table 3 displays the estimates for all parameters. As that table shows, the p-values for all parameters are small; the models are therefore statistically significant and can be used to forecast the future for confirmed, recovered, and death cases [37,38].

Part II (Nonlinear Modelling): Based on the concepts of support vector machine design and the use of pruning algorithms in R, an optimal machine learning configuration was obtained. For the daily new positive COVID-19 cases dataset, the parameters $\gamma = 264$ and $\sigma = 0.008$ give the smallest MSE, i.e., 6,661,412 (see Table 4), so these parameter values were selected for the best-fitting model for this dataset. The smallest MSE values of 250.887 and 21,114,252 (Table 4), with parameters $\gamma = 877$, $\sigma = 0.006$ and $\gamma = 334$, $\sigma = 0.008$, respectively, identify the best-fitting models for the daily new death and daily new recovered cases of COVID-19.

The daily new positive cases series contains 765 data points recorded from 1 October 2020 to 4 November 2022 (see Figure 2). The number of daily new positive COVID-19 cases in Malaysia has risen significantly twice since July 2021, dropping below 5000 new cases in between, before climbing to a maximum of 33,406.00 around March-April 2022. This figure is expected to fall precipitously until 5 November 2022.
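A minimal sketch of the Part I selection described above, assuming the forecast package; "covid_cases.csv" is a hypothetical placeholder for an export of the COVIDNOW data, not a file from the paper.

```r
# Minimal sketch of Part I (linear) model selection and residual diagnostics.
library(forecast)

cases <- ts(read.csv("covid_cases.csv")$new_cases)  # daily new positive cases
fit <- auto.arima(cases, ic = "aic")  # searches (p, d, q) by information criterion
summary(fit)                          # the paper reports ARIMA(2,1,2) for this series
checkresiduals(fit)                   # residual ACF plot plus a Ljung-Box test
```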
The COVID-19 datasets have been used extensively with a wide range of linear and nonlinear time-series models, including ARIMA and machine learning methods [7][8][9]11,13,16,[19][20][21][22][23][24][25][26]. The analysis of daily new positive cases of COVID-19 is critical as an indicator of the effectiveness of the preventive measures that have been taken, are being taken, and will be taken by the authorities to control the spread of this epidemic more effectively.

Therefore, an approach similar to that of Aisyah et al. [15] is used to investigate the performance of the proposed models on the daily new positive cases of COVID-19 dataset, where the dataset is divided into two samples, known as the training sample and the testing sample. According to Aisyah et al. [15] and Nurul Hila et al. [16], datasets should be divided into two parts to achieve the best results: 70-80% for training and the remaining 20-30% for testing [39,40]. The training data are used to assemble the models, while the testing data are used to evaluate the forecasting performance of the models based on statistical measurements. Thus, the daily new positive cases of the COVID-19 dataset is divided into two samples in this study: the training dataset and the test dataset. The training dataset contains 612 observations from day 1 to day 612, accounting for 80% of the data and used exclusively for model formulation, covering 1 October 2020 to 4 June 2022. To evaluate the forecasting performance of the proposed models, the test sample uses approximately 153 observations (20%) from days 613-765, i.e., from 5 June 2022 to 4 November 2022.

Based on the results in Tables 5 and 6 and Figures 3-5, it is possible to conclude that the proposed model produced greater accuracy and efficiency than ARIMA and SVM.
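A minimal sketch of the chronological 80/20 split and of the error measures used throughout the evaluation; the helper names are my own, not the paper's.

```r
# Chronological train/test split (e.g. 612 of 765 observations for training).
split_series <- function(y, train_frac = 0.8) {
  n_train <- floor(train_frac * length(y))
  list(train = y[1:n_train], test = y[(n_train + 1):length(y)])
}

# MSE, RMSE, MAE, and MAPE for observed y and forecasts yhat.
error_measures <- function(y, yhat) {
  e <- y - yhat
  c(MSE  = mean(e^2),
    RMSE = sqrt(mean(e^2)),
    MAE  = mean(abs(e)),
    MAPE = mean(abs(e / y)) * 100)  # undefined whenever y contains zeros
}
```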
New Death Cases Data Forecasts

In addition to the Malaysian daily new positive COVID-19 cases dataset, the Malaysian daily new death cases dataset is used to evaluate the performance of the proposed models. Like the daily new positive cases dataset, the daily new death cases dataset covers the recording period 1 October 2020 to 4 November 2022 (see Figure 6) and contains 765 data points divided into two samples. As the number of daily positive COVID-19 cases reported rises, so does the number of deaths, which peaked at around 600. The training dataset contains 612 observations (80%) from 1 October 2020 to 4 June 2022, and the test sample contains approximately 153 observations (20%) from 5 June 2022 to 4 November 2022, used to evaluate the prediction performance of the proposed model.

As shown in Table 7, the performance of the proposed models on the daily new death COVID-19 dataset is first characterized by statistical measurements such as MSE, MAPE, RMSE, and MAE. The results for the training data in this table show that the proposed model produces the smallest MSE and MAE values, 19.6422 and 1.03218, respectively, when compared with ARIMA, SVM, LSSVM, and ARIMA-SVM. The same pattern can be seen in the test data, where all the statistical measures have their lowest values when compared with the ARIMA, SVM, LSSVM, and ARIMA-SVM models. The study then examines the estimated values of the proposed model for the daily death COVID-19 dataset, as shown in Figure 7a-e. This graph makes it abundantly clear that the proposed model line and the observed data are nearly identical. Additionally, Figure 8a-e shows the estimated values for the test sample for the ARIMA, SVM, LSSVM, ARIMA-SVM, and ARIMA-LSSVM models. Once more, it is evident that, compared with the ARIMA, SVM, LSSVM, and ARIMA-SVM models, the test-sample line of our proposed model (Figure 8e) lies closest to the actual data.
This demonstrates that the outcomes of our proposed model are in line with prior findings and are more effective, accurate, and precise than those of the ARIMA, SVM, LSSVM, and ARIMA-SVM models. The number of daily COVID-19 death cases is also plotted, in Figure 9; based on this figure, the daily new death cases of COVID-19 in Malaysia are anticipated to decline over the course of the following three weeks, indicating a downward trend. As shown in Table 8, a method similar to that used for the daily new positive COVID-19 cases dataset is applied to investigate the performance of the proposed model for the daily new death COVID-19 cases. The overall results (Tables 7 and 8 and Figures 7-9) clearly show that our proposed model outperforms the ARIMA, SVM, LSSVM, and ARIMA-SVM models in terms of efficiency and accuracy.

New Recovered Cases Data Forecasts

The investigation of the performance of the proposed model is continued with the dataset of daily new recovered COVID-19 cases in Malaysia. Predicting Malaysia's daily new recovered COVID-19 cases is just as important as the previous two datasets. The data used in this paper include daily observations from 1 October 2020 to 4 November 2022, for a total of 765 data points in the time series (Figure 10). The number of patients recovered from COVID-19 exhibits the same trend, with two significant increases. Beginning in July 2021, the number of recovered patients increases exponentially until it reaches over 22,500.00 in August 2021 (the time-series plot is shown in Figure 10) and then drops. However, around March-April 2022, the number of recovered COVID-19 cases increased again to a maximum of 33,872.00, then decreased and showed relatively stable movement after that.

This dataset is also divided into two samples, namely the training dataset and the test dataset. The training dataset, which included 612 observations (80%) from 1 October 2020 to 4 June 2022, was used in the same way as for the previous datasets to formulate the model. In contrast, the test sample uses approximately 153 observations (20%) for the period 5 June 2022 to 4 November 2022. Table 9 displays the performance of the proposed model on the daily new recovered COVID-19 cases dataset based on the training and testing samples; the results in Table 9 follow the same pattern as for the previous datasets. Figure 11a-e shows the estimated values of the daily new recovered COVID-19 cases dataset for the test sample. Once more, this graph demonstrates how closely the predicted values from the proposed models match the actual values. Figure 12a-e presents an additional analysis of the outcomes of the proposed model. These plots (Figure 12a-e) show the predicted values for the test samples derived from the ARIMA, SVM, LSSVM, ARIMA-SVM, and ARIMA-LSSVM models.
Among these models, however, the proposed model is closest to the true values because, as can be seen in Figure 11e, the proposed model dominates the others. As shown in Figure 13, the number of daily new recovered COVID-19 cases is plotted. This figure makes it abundantly clear that the proposed model maintains the original sharpness of the data. The daily new recovered COVID-19 cases for Malaysia are predicted from this figure for the upcoming three weeks, and it suggests that these cases will rise in Malaysia in the days to come. As shown in Table 10, further analysis was completed to determine how well the proposed models performed on the daily new recovered COVID-19 cases dataset; the results reported in parentheses are those of the ARIMA, SVM, LSSVM, and ARIMA-SVM models. Based on these findings, the proposed model has produced results that are more accurate and effective than those produced by the ARIMA, SVM, LSSVM, and ARIMA-SVM models.

Conclusions

In conclusion, predicting the spread of COVID-19 with accuracy and efficiency is essential but frequently challenging for decision-makers, especially front-line workers and health-care authorities. Despite what might seem to be an endless spread of COVID-19, there have been numerous efforts to develop time-series models and ongoing research to enhance forecasting model efficacy. One of the most popular types of hybrid model decomposes a time series into linear and nonlinear forms. In this study, a hybrid model that combines linear and nonlinear predictions is proposed. Using three well-known COVID-19 datasets (daily new positive cases, daily new death cases, and daily new recovered cases), our proposed models were demonstrated to have the highest efficiency, accuracy, and precision. In comparison with the ARIMA, SVM, LSSVM, and ARIMA-SVM models, the proposed model, with a cross-validation check based on MSE, RMSE, MAE, and MAPE, makes the most accurate predictions. In terms of performance on both the training and testing datasets, the proposed models yield the smallest values of MSE, RMSE, MAE, and MAPE, indicating that the proposed model's predicted values are more closely aligned with the observed values. Therefore, our proposed models have a higher level of precision and can be recommended for COVID-19 forecasting. It can be concluded that the proposed model may be the most efficient and effective way to increase prediction accuracy, which is especially important for anticipating and stopping the spread of COVID-19 cases.

Limitations and Future Recommendations

In this research study, an attempt was made to predict the overall number of confirmed cases, fatalities, and recoveries of COVID-19 in Malaysia. Investigating SVM performance with various kernel functions and developing the best hyperparameters for the SVM forecasting model can help to increase forecast accuracy in future work. Since only one-step-ahead forecasting is considered in this paper, multi-step forecasts can be the focus of subsequent work; it has been demonstrated that multi-step forecasts can greatly increase the realism of a trading system [41,42]. Additionally, to improve the performance of the model in terms of efficiency and accuracy of dataset prediction, resampling approaches such as the bootstrap and double bootstrap methods [16,43,44] can be considered in the hybridization of ARIMA and SVM.
Although few researchers have applied the bootstrap to daily COVID-19 forecasting, it is a reliable method, and numerous studies have demonstrated that bootstrap resampling yields more precise estimates [45]. Future studies should also consider (i) clinical and behavioural aspects such as actions, cognition, and emotions and (ii) the possibility of underreporting of cases and deaths, as well as delays in notifications, in order to avoid biased predictions, forecasts, and results.
2023-03-18T15:18:36.903Z
2023-03-01T00:00:00.000
{ "year": 2023, "sha1": "189d5a9a33dcb019eb73416d1272083939527313", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4418/13/6/1121/pdf?version=1678938137", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8a7922bbf59fc937b361e1fb19dc37d43f76fdbe", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
4845522
pes2o/s2orc
v3-fos-license
Competitive principal of tumor control in radiological clinic In order to verify the principle of indirect tumor control based on diverting morphogenic cells away from the tumor, 114 patients with advanced ovarian carcinoma were treated with subtotal half-body (lower part) irradiation at low doses (0.1 Gy × 10 over 3 weeks, or 3 Gy × 3 daily), and the data obtained were compared with those for 190 patients who received conventional local irradiation of the tumor (2 Gy × 23 daily). The surgery and chemotherapy components were equalized in both groups. Five-year survival of 34% and 11% was obtained with low-dose half-body irradiation for primary and relapsed patients, respectively, compared with 7% and 0% for conventional local radiotherapy. It is concluded that repair/regeneration processes provoked artificially in the normal tissues of the cancer host are capable of competing remotely with the tumor for the morphogenic/feeding cells that originate from bone marrow and circulate with the blood. Correspondence to: Shoutko AN, Russian Research Centre of Radiology and Surgical Technologies, Saint-Petersburg, Russian Federation, Russia, E-mail: shoutko@inbox.ru Received: July 20, 2017; Accepted: August 14, 2017; Published: August 17, 2017

Introduction

Conventional medicine recognizes the selective killing of tumor cells as the only way of fighting cancer. This approach has brought some undoubted benefits in the past, but in recent decades the effectiveness of traditional treatment has progressed more slowly than would be desirable. The life span of mammals, under normal conditions and under chronic irradiation alike, depends on the limited proliferative capacity of the bone marrow given at birth [1]. Despite this, strong myelodepression inevitably follows palliative chemo- and radiotherapy of cancer. As we have argued earlier, suppression in the "therapeutic" range is followed by temporary restriction of morphogenic cell activity inside a tumor [2]. The morphogenic cells (trophocytes / feeding cells) are represented in the blood by hematopoietic stem cells, pro-lymphocytes, angiogenic T-cells, and some others [3-5]. There are two ways to restrict the support these cells give to tumor growth: 1) to provoke repopulation of stem cells in the bone marrow by injuring it with relatively high "hemotoxic" doses of a "curative" factor, or 2) to redirect the circulating morphogenic cells from the tumor toward the repair/regeneration of numerous but nonlethal injuries in different normal cells, induced by relatively low doses of "curative" toxicants [2]. In both cases the mechanism of the expected benefit has to be not direct but mediated by a rearrangement of the tissue-renewal balance between the cancer and the host body. The purpose of our presentation is to demonstrate the reliability of the described "competitive" approach to cancer therapy under real clinical conditions.

Method

From 1995 to 2005, patients with advanced epithelial ovarian cancer were treated at the Russian Research Centre of Radiology and Surgical Technologies of the Federal Health Ministry, using a 5 MV linear accelerator. Half-body (lower part) irradiation (HBI) was performed for 114 patients (stage III: 56, stage IV: 21, relapse: 37), and they were compared with 190 patients (stage III: 66, stage IV: 25, relapse: 99) who received conventional local radiotherapy (CLR). All patients received surgery before irradiation and chemotherapy after it.
The surgery (hysterectomy along with bilateral salpingo-oophorectomy and omentectomy) and standard platinum-based chemotherapy (≥ 6 courses) were identical in the HBI and CLR groups. All patients in the HBI group were divided before irradiation into 2 subgroups: primary and relapsed. The 3 Gy × 3 daily regime was used for primary patients, and the 0.1 Gy × 10 over 3 weeks regime for relapsed patients.

Result

The retrospective results obtained (Table 1) demonstrate very clearly the possibility of achieving better survival without traditional local irradiation of the tumor at a high, so-called "tumoricidal" total dose of 40-50 Gy. These results are quite comparable with modern specific-survival data already published by the National Cancer Institute, USA, and others [6,7].

Discussion

The distinction found between "conventional" and "competitive" therapy may be attributed to the features of the radiation component of the two combined schemes. Since 1998, we have been demonstrating an indirect mechanism by which slightly increased natural or artificial background radiation and low-dose total/subtotal radiation therapy diminish cancer activity. In opposition to the idea of radiogenic stimulation of anti-cancer immunity [8,9], the mechanism proposed by us is based on the redistribution of circulating morphogenic cells from the tumor to exposed normal tissues [10-13], and it has been statistically tested [14,15]. As the proliferative resource of the bone marrow is limited and associated very closely with life span and the level of lymphopenia [16,17], the HBI with a cumulative dose of 9 Gy was employed mostly as a myelosuppressive one. The HBI with a cumulative dose of 1 Gy was assumed to be able to divert the circulating morphogenic cells [18] from the tumor without diminishing their number. It is obvious that neither regime can provide tumor growth control by direct killing of malignant cells [19]. They are rather similar to non-selective cytotoxic chemotherapy of cancer, which cannot damage tumor cells lethally as conventional local radiotherapy does; otherwise, non-selective chemotherapy would be fatal to the organism. Besides this, the myelosuppressive action of modern combined therapy is not a rare, random event, as 85% of the main anti-cancer drugs are myelodepressants. Hence, the mechanism of any nonselective cytotoxic treatment is supposed to be an indirect one as well, causing temporary disturbances of cellular reproduction in distant normal tissues [20]. The bone marrow is the main target among them, being the most sensitive/damaged physiological system among those responsible for the preservation of life.

Conclusion

We find no principal objections to continuing comprehensive investigation of "competitive" low-dose radiotherapy as an alternative to the nonselective cytotoxic chemotherapy of cancer.

References

1. Drapeau C (2010) Cracking the stem cell code: demystifying the most dramatic scientific breakthrough of our times. Sutton Hart Press, Portland.
2. Shoutko AN, Ekimova LP (2014) Lymphocytopenia can contribute in common benefit of cytotoxic therapy of cancer. Inter Medical 3: 5-13.
3. Kucia M, Ratajczak J, Ratajczak MZ (2005) Bone marrow as a source of circulating CXCR4+ tissue-committed stem cells. Biol Cell 97: 133-146. [Crossref]
4. Shoutko A, Ekimova L, Mus V, Sokurenko V (2012) Fluctuations of CD34 cells number in blood of cancer patients during final year of life. MHSJ (Acad Publ Platform) 13: 7-13.
5. Shoutko AN, Gerasimova OA, Ekimova LP, Zherebtsov FK, Mus VF, et al. (2016) Long-term activation of circulating liver-committed mononuclear cells after OLT. Jacobs Journal of Regenerative Medicine: 011.
6. Colombo N, Van Gorp T, Parma G, Amant F, Gatta G, et al. (2006) Ovarian cancer. Crit Rev Oncol Hematol 60: 159-179. [Crossref]
7. Wright JD, Chen L, Tergas AI, Patankar S, Burke WM, et al. (2015) Trends in relative survival for ovarian cancer from 1975 to 2011. Obstet Gynecol 125: 1345-1352. [Crossref]
8. Sakamoto K (2004) Radiobiological bases for cancer therapy by total or half-body irradiation. Nonlinearity Biol Toxicol Med 2: 293-316. doi: 10.1080/15401420490900254 [Crossref]
9. Scott BR (2008) Low-dose-radiation stimulated natural chemical and biological protection against lung cancer. Dose Response 6: 299-318. [Crossref]
10. Shoutko A, Shatinina N (1998) Chronic cancer: could it be? Coherence-Int J of Integrative Medicine 2: 36-40.
11. Shoutko AN, Ekimova LP (2014) The impact of middle age on the viability of patients with nonmalignant and malignant diseases. Cancer Research Journal 2: 114-120.
12. Shoutko AN, Ekimova LP (2014) Abnormal tissue proliferation and life span variability in chronically irradiated dogs. Radiat Environ Biophys 53: 65-72. [Crossref]
13. Shoutko AN, Ekimova LP (2017) The effects of tissue regenerative status on hormesis in dogs irradiated during their lifespan. Open Journal of Biophysics 7: 101-115.
14. Shoutko AN, Yurkova LE, Borodulya KS, Ekimova LP (2015) Lymphocytopenia and cytotoxic therapy in patients with advanced ovarian cancer. Cancer Research Journal 3: 47-51.
15. Shoutko AN, Yurkova LE, Borodulya KS, Ekimova LP (2016) Protracted half-body irradiation instead of chemotherapy: life span and lymphocytopenia in relapsed ovarian cancer. International Journal of Tumor Therapy 5: 1-7.
16. Common terminology criteria for adverse events v3.0 (CTCAE) (2003) March 31. Publish date: 9 August 2006, American National Standards Institute.
17. Rimando J, Campbell J, Kim JH, Tang SC, Kim S (2016) The pretreatment neutrophil/lymphocyte ratio is associated with all-cause mortality in black and white patients with non-metastatic breast cancer. Front Oncol 6: 81. [Crossref]
18. Rennert RC, Sorkin M, Garg RK, Gurtner GC (2012) Stem cell recruitment after injury: lessons for regenerative medicine. Regen Med 7: 833-850. [Crossref]
19. Heinzerling JH, Cho J, Choy H (2011) The role of radiotherapy in the treatment of metastatic diseases. In Laden D, Welch DR, Paola B, Eds. Cancer metastasis: biologic basis and therapeutics. New York, USA; Cambridge University Press, pp. 612-621.
20. Malhotra V, Perry MC (2003) Classical chemotherapy: mechanisms, toxicities and the therapeutic window. Cancer Biol Ther 2: 2-4.

Table 1. The comparison of overall 5-year survival after conventional and "competitive" combined treatment.

Mode of combined treatment and status of cancer | Conventional: local irradiation (50 Gy*) | "Competitive": subtotal irradiation (1-9 Gy*)
Primary | 6.6% | 33.8% (p < 0.01)
Relapse | 0% | 10.8% (p < 0.05)

*Cumulative doses are shown; p-values calculated according to the exact Fisher test.
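As a check on the Table 1 comparison, the exact Fisher test can be reproduced in R. The survivor counts below are back-calculated assumptions from the reported group sizes and percentages (33.8% of 77 primary HBI patients is approximately 26, 6.6% of 91 is approximately 6, 10.8% of 37 is approximately 4), not counts taken from the paper.

```r
# Minimal sketch of the exact Fisher tests behind Table 1 (assumed counts).
primary <- matrix(c(26, 77 - 26,   # HBI ("competitive"): alive at 5 y / dead
                    6,  91 - 6),   # CLR (conventional):  alive at 5 y / dead
                  nrow = 2, byrow = TRUE,
                  dimnames = list(c("HBI", "CLR"), c("alive", "dead")))
fisher.test(primary)$p.value       # well below 0.01, consistent with Table 1

relapse <- matrix(c(4, 37 - 4,     # HBI relapse: 10.8% of 37
                    0, 99 - 0),    # CLR relapse: 0% of 99
                  nrow = 2, byrow = TRUE)
fisher.test(relapse)$p.value       # below 0.05, consistent with Table 1
```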
2019-03-17T13:11:45.756Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "9989a89be43999bc6cf40105f2e2ec96fce9a06b", "oa_license": "CCBY", "oa_url": "https://www.oatext.com/pdf/RDI-1-107.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "45b024f42a170ac797629c521f2a9c38b20380cb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
42515755
pes2o/s2orc
v3-fos-license
Comparison of Systemic Oral Malodor in Patients Undergoing Hemodialysis and Peritoneal Dialysis Chronic renal failure is one of the major causes of systemic oral malodor, owing to uremia. Hemodialysis (HD) and peritoneal dialysis (PD) are the important procedures in the management of patients with end-stage renal disease (ESRD). This study aimed to compare systemic oral malodor in patients undergoing HD and PD. 74 patients (40 HD and 34 PD) recently diagnosed with ESRD were selected. Patients with poor oral hygiene or with oral malodor arising from any intraoral etiology, such as caries, periodontal disease, or impacted teeth, were excluded. Oral Hygiene Index (OHI) scores of the patients were calculated in order to assess oral health, and systemic oral malodor was evaluated using the organoleptic method. All measurements were performed pre-dialysis and post-dialysis (3 months after therapy). There was no statistically significant difference between the groups in OHI scores (p > 0.05). The oral malodor scores were lower at the post-dialysis measurement than at baseline in both groups (p < 0.05). The results of the organoleptic measurements indicated that systemic oral malodor was higher in the HD group (2.67 ± 0.81) than in the PD group (1.98 ± 0.57) (p < 0.05). This study revealed that PD was more effective than HD in decreasing systemic oral malodor in ESRD patients.

INTRODUCTION

The terms halitosis, oral malodor, and bad breath can all be used to describe unwanted breath. Halitosis is a widespread complaint among adults all over the world; studies have reported a prevalence of halitosis ranging from 22% to more than 50%. Halitosis has a multifactorial etiology: extrinsic and intrinsic factors both play a role (1,2). Extrinsic factors include specific foods, alcohol, tobacco, and specific spices. Intrinsic factors comprise both systemic and oral factors (1). Halitosis may derive from periodontal disease, peri-implant disease, pericoronitis, low salivary flow rate, oral mucosal ulcerations, defective dental restorations, necrotic tooth pulps, or a tongue coating (3-12). In 90% of halitosis cases, intraoral factors are the reason. A clinical evaluation of halitosis in Belgium indicated that 76% of these patients had oral causes: gingivitis/periodontitis (11%), a tongue coating (43%), or a combination of the two (13). However, 10% of cases derive from systemic factors (3,14,15), which comprise nonpathologic and pathologic factors (1). Systemic factors include diabetes mellitus, destruction of the liver, and renal failure. Diabetic ketoacidosis causes a typical breath odor (16): diabetic patients have the sweet or fruity odor of acetone (17,18). Liver failure, however, leads to a musty or, rarely, sulfurous odor (18). One third of patients receiving hemodialysis have an ammonia-like oral odor (19). This malodor in renal disease patients can be associated with low salivary flow rates and high blood urea nitrogen levels. Peritoneal dialysis (PD) can reduce this problem (20); however, the effect of hemodialysis (HD) on oral malodor is unknown. We therefore aimed to investigate and compare systemic oral malodor in patients undergoing HD and PD before and after treatment.
MATERIALS AND METHODS

A total of 74 patients (40 HD and 34 PD patients) recently diagnosed with ESRD were recruited from the Department of Nephrology, Faculty of Medicine, Atatürk University, Turkey. All patients in this study were recently diagnosed with ESRD and had recently initiated PD or HD. Before enrollment, each patient consented to a review protocol. All procedures followed the tenets of the Declaration of Helsinki, and the study protocol was approved by the Local Ethics Committee of Atatürk University.

In this study, the twin-bag system was employed in all patients, and different kinds of PD fluid (Baxter Healthcare and Fresenius Medical Care) were used. All PD patients were on continuous ambulatory PD, and all HD patients were on standard HD therapy, 4 hours 3 times a week. Patients who were taking medications including tricyclic antidepressants, anticholinergics, antihistamines, and beta-blockers, receiving radiation therapy, or using any tobacco or alcohol products were excluded from this study. Patients with sinusitis, nasal septal deviation, lower respiratory tract infection, gastric reflux, liver failure, or diabetes mellitus were also excluded. All patients received oral hygiene education 15 days before PD and HD therapy as a standard procedure. In the initial examination, we first eliminated possible oral factors causing halitosis, such as periodontal problems and dental decay. After this elimination, we evaluated dental health just before measuring halitosis levels. Assessment of dental health consisted of two parts: the decayed, missing, and filled teeth (DMFT) index for the incidence of dental caries, and the Oral Hygiene Index (OHI). One examiner, who had been trained in caries and periodontal assessment, performed all the examinations (G.E.D.). Both dental examinations were performed using a mouth mirror and a Williams periodontal probe to determine the periodontal index. For the DMFT caries index, the examiner recorded the sum of the teeth that were decayed (D), missing (M), and filled (F) according to the WHO criteria for each patient. The OHI is the sum of the Debris Index and the Calculus Index. In the Debris Index, soft deposits are scored (21); this is a well-validated index of dental plaque that has been used in dental research for more than 40 years. The categories are as follows: 0 = no debris or stain present; 1 = soft debris covering not more than one-third of the tooth surface, or presence of extrinsic stain; 2 = soft debris covering more than one-third but not more than two-thirds of the tooth surface; 3 = soft debris covering more than two-thirds of the tooth surface. In the Calculus Index: 0 = no calculus present; 1 = supragingival calculus covering less than one-third of the exposed tooth surface; 2 = supragingival calculus covering more than one-third but not more than two-thirds of the exposed tooth surface, or the presence of subgingival calculus around the cervical portion of the tooth, or both; 3 = supragingival calculus covering more than two-thirds of the exposed tooth surface, or the presence of a heavy band of subgingival calculus around the cervical portion of the tooth, or both. The DMFT index and OHI were calculated before and after PD and HD therapy. We used the organoleptic scale described by Rosenberg and colleagues to measure halitosis (22). The organoleptic scale ranges from 0 to 5, where 0 = no odor, 1 = barely noticeable odor, 2 = slight but clearly noticeable odor, 3 = moderate odor, 4 = strong odor, and 5 = extremely foul odor. All patients were required to refrain from
eating and drinking for 8 hours prior to the test and to avoid eating garlic and onions within the 24 hours before the assessment. They were also asked to abstain from tooth brushing, using toothpaste, mouthwash, breath fresheners, scented cosmetics, or grooming aids on the morning of testing. All subjects were tested within a few consecutive days between 08:00 and 09:00 hours. The organoleptic evaluation panel consisted of 3 researchers who were professionals in oral health. The researchers were blinded to the PD or HD status of the patients. Further, the organoleptic test was conducted using a screen that concealed the researchers from the individuals (to avoid the influence of the individuals' appearance on judgment) and a sterile glass tube (10 cm in length and 2 cm in diameter) fitted into a hole in the screen. Each patient was requested to close his/her mouth for 1 to 2 minutes prior to sampling, place about 4 cm of the glass tube into his/her mouth, and then slowly exhale through the glass tube. This step was repeated during each test. The three researchers assessed halitosis levels individually, each blinded to the other researchers' decisions. Organoleptic scores were recorded on an ordinal scale independently by each researcher, and the average of the scores was calculated.

Statistical Analyses

Data are presented as frequencies, percentages, means, and standard deviations. Statistical analyses were carried out using SPSS 15 statistical software (SPSS Inc., Chicago, IL, USA). Halitosis, DMFT index, and OHI values obtained before and after PD and HD therapy were compared by paired t-test. The halitosis scores measured by each researcher were compared by Friedman variance analysis. Pearson's correlation analysis was performed to compare halitosis with OHI and DMFT. The level of significance was set to p < 0.05.

DISCUSSION

Halitosis has a large social and economic impact, and bad breath is an important concern for the majority of patients who suffer from it. In general, intraoral conditions such as insufficient dental hygiene, periodontitis, or tongue coating are considered to be the most important causes (85%) of halitosis (23). Non-oral causes of oral malodor have received attention in the dental literature, particularly because of the clinical importance of early diagnosis. Chronic renal failure (CRF) is related to a small but significant percentage of halitosis cases (1,2). Renal impairment is normally a result of chronic glomerulonephritis, which damages glomerular function and leads to an increased urea level in the blood. The exhaled air is described as ammonia-like and is generally accompanied by complaints of dysgeusia (a salty taste) (24).

The primary reference standard for the detection of oral malodor is the human nose. Direct sniffing of expired air (organoleptic and hedonic assessment) is the simplest method: it is inexpensive, requires no equipment, and can detect a wide range of odours. The method nonetheless presents several problems, such as the extreme subjectivity of the test, the lack of quantification, the saturation of the nose, and limited reproducibility (25). Despite these disadvantages, organoleptic scoring is still considered the gold standard for the detection of oral bad breath and is the most common method used to evaluate oral malodor. Before halitosis can be managed effectively, an accurate diagnosis must be achieved (26); the treatment of this problem can then be carried out by physicians and/or dental clinicians (27,28).
The first step in the treatment of oral malodor is to assess the patient for any oral diseases or conditions that may cause oral malodor (1). For disease-free people, current oral malodor treatment is based on the assumption that the malodor results from an overgrowth of oral microorganisms that produce offensive volatile compounds. If it is determined that the source of the malodor is not in the oral cavity, the patient should be referred to a physician for treatment of any related systemic disease (1). In the present study, we first eliminated possible oral factors that can cause halitosis; after this elimination, we measured halitosis levels related to ESRD.

We used the OHI to evaluate dental hygiene. There was no statistically significant difference between the groups in OHI scores, which may be because both the HD and PD patients performed similar oral self-care. The oral malodor scores were lower at the post-dialysis measurement than at baseline in both groups. Keleş et al. (20) observed that, as BUN levels decreased, the severity of halitosis also decreased in a parallel manner. A change in salivary composition regarding urea has also been reported in dialysis patients (29,30). Renal disease in the form of CRF is associated with high blood urea nitrogen levels and low salivary flow rates, and PD can decrease this problem (20). Keleş et al. (20) found higher salivary urea values in the dialysis group than in the control group, thus supporting the findings of Epstein et al. (30). Uremic odor could be associated with the accumulation of urea in the saliva (19); indeed, a higher incidence of uremic odor may correlate with higher salivary urea in CRF patients (29). The level of salivary urea, which might have supported this idea, was not included in this study and should be further investigated in other studies. Trimethylaminuria is a rare odor-producing metabolic disease with symptoms of dysgeusia (perversion of the sense of taste) and dysosmia (defect or impairment of the sense of smell) that are due to excess production of trimethylamine [(CH3)3N]. Uremia caused by kidney failure also produces (CH3)3N, along with dimethylamine (1). Dialysis involves the removal of urea and other toxic substances from the plasma, as well as the correction of electrolyte imbalance. Of the two methods of dialysis, HD is the most commonly used: blood is passed through an extracorporeal circuit and pumped across an artificial semipermeable membrane, which brings the blood into contact with the dialysate. The second method is intermittent and continuous ambulatory peritoneal dialysis (PD). This method utilizes the peritoneal membrane as the semipermeable membrane, with capillaries on one side and a high-osmotic fluid infused into the peritoneal cavity on the other. The peritoneal cavity is drained, and the cycle is repeated after a suitable time to allow the equilibration of diffusible substances.

The results of the organoleptic measurements indicated that systemic oral malodor was higher in the HD group than in the PD group. Both groups had similar OHI scores, so this difference may reflect the effect of systemic urea on oral malodor. Both HD and PD treatment cause systemic changes, oral complications, and alterations in salivary composition and output (30,31). Keleş et al. (20)
found that uremic patients had lower salivary flow rates, which were found to be related to halitosis. This may be the result of the accumulation and putrefaction of oral epithelial debris and food, a low oxygen concentration, reduced availability of carbohydrates as bacterial substrate, and a high oral pH. Dysgeusia and uremic fetor, i.e., bad taste and odour, are caused not only by xerostomia but also by the presence of urease-splitting oral organisms, which metabolize the urea present at high levels in these patients. This can be a factor in the high malodor levels of HD patients.

The prevention of oral malodor is very important in patients with CRF because it leads to discomfort and psychosocial embarrassment. Dentists, oral hygienists, and medical doctors can help uremic patients to reduce their level of halitosis. This may be achieved via PD and HD treatment, by decreasing the BUN level; in this way, patients can feel better and more confident in their daily lives.

The present observations suggest that ESRD patients who undergo dialysis therapy have high oral malodor levels before dialysis treatment, and that HD patients have higher oral malodor scores than PD patients. Because there were no significant differences in OHI, the observed decrease in halitosis level may not be related to debris and oral conditions but rather to the urea level. Furthermore, PD and HD therapy may play an important role in decreasing the level of halitosis in such patients, and PD is more effective than HD in decreasing oral malodor levels.

Table 1. Mean levels of DMFT, OHI, and halitosis in peritoneal dialysis and hemodialysis patients.

Table 2. Comparison of mean levels of DMFT, OHI, and halitosis between peritoneal dialysis and hemodialysis patients.
2017-09-06T09:39:00.350Z
2014-10-15T00:00:00.000
{ "year": 2014, "sha1": "60db4326560e8e8009f3d70eaf26853cb45dc5fe", "oa_license": "CCBY", "oa_url": "http://www.ejgm.co.uk/pdf-82055-17506?filename=Comparison%20of%20Systemic.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "60db4326560e8e8009f3d70eaf26853cb45dc5fe", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Medicine" ] }
246812004
pes2o/s2orc
v3-fos-license
Comparison of Accrual Ratio and Cash Ratio Accuration in Financial Reports Accounting and financial reporting fraud has been occurring frequently of late. Failure to assess the veracity of financial reports starts from the many accrual components involved in their preparation. This study seeks to explain the flow of financial transactions as important information, considering that activities in the financial sector require quick and relevant decisions. The flow of transactions in financial reports consists of cash flows and accruals; finance in business is almost analogous to direct current and alternating current. In practice, readers of financial reports are often misled by their own unfamiliarity when interpreting profit, even though such misinterpretation will lead to investment errors. This study analyzes the results of financial ratio investments using ratios based on two factors, cash and accruals. A hermeneutic qualitative approach is used, with data sourced from the Indonesia Stock Exchange; the study uses a sample of manufacturing companies listed on the exchange. Three forms of financial ratio analysis are used, namely ROA (return on assets), ROI (return on investment), and ROE (return on equity). The researchers measure by comparing ROA, ROI, and ROE based on accruals and on cash. The results of the comparison of the accuracy of the accrual ratio and the cash ratio in the financial reports are presented in what follows.

INTRODUCTION

Every company has financial reports that aim to provide information regarding the financial position, performance, and changes in the financial position of a company that is useful to a large number of users of financial reports in making economic decisions (Maidoki, 2013). Financial reports must be prepared periodically for interested parties. Financial reports provide financial information about a company that can be used in making economic decisions, and they show the performance that has been delivered by management (stewardship), that is, management's responsibility for the use of the resources entrusted to it (Shakespeare, 2020). Financial reports describe the results of the accounting process and serve as a communication tool for parties with an interest in financial data or company activities.

The recent spike in fraud cases has caused the image of the accounting profession to decline. The phenomenon in the field also shows an increase in fraud cases in recent times, as stated in a previous study (Xin et al., 2018). Incidents like this in the world of accounting involve accrual elements (recognition), and accrual elements are very easy to manipulate. To anticipate the impact of this, the accounting profession has taken steps by requiring companies to prepare cash-based financial reports, which we know as cash flow reports (Osiichuk & Mielcarz, 2021). As we all know, financial reports are the basis for making investment decisions, from which various types of analytical tools are derived, such as financial ratios and future financial estimates used for investment decision-making. Errors in financial reports are actually known in detail only by accountants, not by others. However, the fact is that other people also rely heavily on financial reports, especially through the use of financial ratios.
No one can blame investors for basing their decisions on financial ratios, but the important question is how accurately they can interpret ratios derived from financial reports that contain many accrual elements, given that the ratios themselves therefore still contain those accrual elements. This can produce misinformation that is fatal for investing. This study analyzes the problems inherent in financial ratios when investors invest on the basis of ratio analysis. Its purpose is to minimize the bias in ratio-analysis results so as to improve the accuracy of future investment estimates. A company's financial ratios are very important for a potential investor in determining how much to invest, and the results of the analysis can also serve as a reference for business development (Minh-Trang et al., 2017). A ratio is one number compared to another as a relationship. Financial ratio analysis is the process of examining indexes related to the accounting figures in financial reports, such as the balance sheet, income statement, and cash flow statement, with the aim of assessing a company's financial performance. This analysis provides an overview of the company's financial position and performance that can guide business decisions (Krylov, 2018). Financial ratio analysis has two main users: investors and management. Investors use financial ratios to judge whether a company is a good investment; by comparing ratios across companies and industries, they can determine which investment is best. Management uses financial ratios to gauge how well the company is performing and to identify where it can improve (Khalid et al., 2020). For example, if a company has low gross margins, managers can evaluate how to increase them. In general, financial ratio analysis is useful to management and investors for: deciding whether to invest in a company's stock; extending credit to a company; determining the company's health and development, including for evaluation purposes; determining the financial strength of competitors (positioning); and assessing the level of distress the company faces. There are several types of financial ratios, including liquidity, activity, solvency, profitability, and investment ratios (Ponikvar et al., 2009). The liquidity ratio measures a company's short-term ability to pay by comparing its current assets with its current liabilities. The activity ratio looks at a set of assets and determines their level of activity at a given level of sales; low activity at a given level of sales means larger excess funds tied up in those assets, funds that would be better invested in other, more productive assets. The solvency ratio measures the company's ability to meet its long-term obligations. The profitability ratio shows the level of return (profit) relative to sales or assets. The investment ratio measures the company's ability to provide returns to funders, especially capital-market investors, within a certain period; it is of value to investors, in line with the function of financial reports, for assessing the performance of stock securities in the capital market (Ponikvar et al., 2009). A simple computational sketch of the profitability ratios used later in this study follows below.
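To make the profitability measures used in this study concrete, the sketch below computes ROA, ROE, and ROI on both an accrual basis (from net income) and a cash basis (from operating cash flow). This is a minimal illustration rather than the authors' actual procedure; the field names (net_income, operating_cash_flow, total_assets, total_equity, invested_capital) and the choice of operating cash flow as the cash-basis numerator are assumptions made for the example.

```python
# Minimal sketch: accrual-based vs. cash-based profitability ratios.
# Field names and the cash-basis numerator (operating cash flow) are
# illustrative assumptions, not definitions taken from the paper.

def profitability_ratios(net_income, operating_cash_flow,
                         total_assets, total_equity, invested_capital):
    """Return (accrual, cash) versions of ROA, ROE, and ROI."""
    accrual = {
        "ROA": net_income / total_assets,
        "ROE": net_income / total_equity,
        "ROI": net_income / invested_capital,
    }
    # Cash-basis variants replace accrual net income with operating
    # cash flow, removing estimates such as depreciation and allowances.
    cash = {
        "ROA": operating_cash_flow / total_assets,
        "ROE": operating_cash_flow / total_equity,
        "ROI": operating_cash_flow / invested_capital,
    }
    return accrual, cash

# Example with made-up figures (in billions of rupiah):
accrual, cash = profitability_ratios(
    net_income=120, operating_cash_flow=95,
    total_assets=1500, total_equity=800, invested_capital=1000)
print("accrual:", accrual)
print("cash:   ", cash)
```

The only design choice here is the numerator: the accrual versions inherit every estimate embedded in net income, while the cash versions are built from realized cash flows, which is the contrast the study sets out to test.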
Investment can be defined as committing a certain amount of funds now with the aim of obtaining profits in the future. In other words, investment is a commitment to sacrifice current consumption in order to increase consumption in the future. Investment can also be understood as placing funds or capital to generate wealth that will provide a rate of return, both now and in the future. The expected future profit is compensation for the time and risk associated with the investment; in the investment context this expected profit is commonly called the return (Brimberg et al., 2008). Fundamentally, investors invest to maximize returns. Several sources of risk can affect the size of investment risk, including interest rate risk, business risk, and financial risk (Duy Bui et al., 2021; Mouna & Anis, 2016). To reduce risk, investors need to diversify. Diversification means that investors should form investment portfolios in such a way that risk is minimized without reducing the expected return; reducing risk without reducing return is the investor's goal. Portfolio theory says: do not put all your eggs in one basket, because if the basket falls, all the eggs break. Likewise with investment: do not place all funds in a single investment, because if that investment fails, all the invested funds may be lost. Portfolio theory formalizes the concept of diversification quantitatively. A portfolio is defined as a collection of investment securities held by an investor, whether an individual or an entity; the combination of assets may consist of real assets, financial assets, or both (Beyhaghi & Hawley, 2013). An investor usually does not choose just one stock but holds a combination, because by combining stocks investors can achieve optimal returns while minimizing risk through diversification (Okunevičiūtė-Neverauskienė et al., 2021). In other words, when an investor assembles several securities for investment, the investor has formed a stock portfolio whose goal is diversification, which reduces the risk faced compared with investing in individual stocks. Choosing the optimal portfolio, however, is not easy. Diversification reduces portfolio risk by combining, or adding, investments (assets or securities) whose returns have low or negative correlation, so that the variability of returns, and hence risk, can be reduced. Thinking about investment cannot be separated from financial reports. Financial reports present data that still contain estimated values, such as depreciation, accounts payable, allowances for losses on receivables, and other accrual items that flow into the income statement. This is compounded by declining trust in the accounting profession due to the many fraud cases. If financial reports contain many estimated values, then financial ratios derived from them automatically contain many estimates as well. In the end, most users and investors face two compounding estimation errors: the estimation error in the financial reports themselves, and the error in ratios estimated from those flawed reports.
In the end, who can accept such a risk? Let us move toward a more tangible basis, namely the cash basis, better known as cash flow. To prepare a cash flow statement, accountants need two key data sources: records of cash receipts and records of cash disbursements. Cash receipt records cover items such as cash income and investments received in cash, while cash disbursement records cover payments of expenses, including outlays for investments aimed at business expansion. Cash flow statements are prepared because they offer many benefits. One is that they show the financial position quickly and easily: a positive net cash flow means the company is generating a surplus, while a negative figure means a deficit. In addition, cash flow statements serve as important supporting information for valuing a company; interested parties such as creditors and investors can assess the company directly from them.

METHOD

This study aims to determine the accuracy of cash-based and accrual-based financial ratios. It uses a qualitative hermeneutic approach, intended to interpret the phenomena occurring in the field using secondary data from the Indonesia Stock Exchange. The sample consists of the 183 manufacturing companies listed on the Indonesia Stock Exchange. Data collection and analysis follow a series of processes and techniques for extracting data in the field, similar to other research methods. The researchers mined the data using secondary-data analysis of capital-market financial reports, viewed from the side of the transaction flows underlying the financial ratios. Three commonly used financial ratios are analyzed: return on assets (ROA), return on investment (ROI), and return on equity (ROE). The researchers compare ROA, ROI, and ROE computed on an accrual basis with ROA, ROI, and ROE computed on a cash basis. The companies' cash flow analyses are processed, compared, and used as the basis for concluding which is better: accrual-based or cash-based financial ratios.

RESULTS AND DISCUSSION

As explained above, the purpose of this study is to determine the accuracy of cash-based and accrual-based financial ratios. Failure to assess the correctness of financial reports originates in the many accrual elements used in their preparation. This research is needed because activities in the financial sector require quick and relevant decisions. The transaction flows in financial reports, consisting of cash flows and accruals, are compared with capital-market returns to determine how investors interpret investments based on financial ratio analysis. The sample consists of the 183 manufacturing companies listed on the Indonesia Stock Exchange, with a final total of 175 after 8 companies were excluded for incomplete data. The research data are presented below. Three forms of ratio are used in analyzing financial ratios against stock returns: ROA, ROI, and ROE on an accrual basis, and ROA, ROI, and ROE on a cash basis. The comparison of stock returns (investment) is presented as comparison charts; a sketch of how such a trend comparison could be quantified is given below.
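To illustrate how the "chart tends to follow" comparisons in the results could be quantified, the sketch below correlates a stock-return series with accrual-based and cash-based ratio series and reports which basis tracks returns more closely. This is a hypothetical illustration with made-up series, not the study's actual procedure.

```python
import numpy as np

# Hypothetical yearly series for one firm (illustrative values only).
stock_return = np.array([0.05, 0.12, -0.03, 0.08, 0.10])
roa_accrual  = np.array([0.07, 0.09,  0.02, 0.06, 0.08])
roa_cash     = np.array([0.04, 0.11, -0.01, 0.07, 0.09])

def trend_correlation(returns, ratio):
    """Pearson correlation between stock returns and a ratio series."""
    return np.corrcoef(returns, ratio)[0, 1]

corr_accrual = trend_correlation(stock_return, roa_accrual)
corr_cash = trend_correlation(stock_return, roa_cash)
basis = "cash" if corr_cash > corr_accrual else "accrual"
print(f"accrual: {corr_accrual:.3f}, cash: {corr_cash:.3f} "
      f"-> the {basis} basis tracks returns more closely")
```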
The comparison using the accrual-ROA and cash-ROA approaches (Figure 1) shows a unidirectional relationship: an increase in accrual ROA is followed by an increase in cash ROA. Viewed against the trend of the stock-return chart, the chart tends to follow the movement pattern of cash-basis ROA.

Figure 2. Graph of Accrual ROI and Cash ROI

The comparison using the accrual-ROI and cash-ROI approaches likewise shows a unidirectional relationship (Figure 2): an increase in accrual ROI is followed by an increase in cash ROI. Viewed against the trend of the stock-return chart, stock returns tend to follow the movement pattern of both accrual ROI and cash ROI.

Figure 3. Graph of Accrual ROE and Cash ROE

The comparison using the accrual-ROE and cash-ROE approaches shows an inverse relationship (Figure 3): when accrual ROE increases, cash ROE decreases. Nevertheless, viewed against the trend of the stock-return chart, the chart tends to follow the movement pattern of cash-basis ROE. These results support the view that investment cannot be separated from financial reports, and that financial reports present data that still contain estimated values, such as depreciation, accounts payable, allowances for losses on receivables, and other accrual items that flow into the income statement (Tran & Duong, 2020). Combined with declining trust in the accounting profession due to the many fraud cases, if financial reports contain many estimated values, then the financial ratios derived from them automatically contain many estimates as well. In the end, most users and investors face two compounding estimation errors: the estimation error in the financial reports themselves, and the error in ratios estimated from those flawed reports. Investors therefore rely more on figures that are not estimates but certain, namely the cash approach.

CONCLUSION

Studies of financial ratio analysis against stock returns were carried out: analyses of ROA, ROI, and ROE on an accrual basis, and of ROA, ROI, and ROE on a cash basis. The accrual-ROA and cash-ROA approaches show a unidirectional relationship: an increase in accrual ROA is followed by an increase in cash ROA. Likewise, the accrual-ROI and cash-ROI approaches show a unidirectional relationship. The accrual-ROE and cash-ROE approaches, however, show an inverse relationship: when accrual ROE increases, cash ROE decreases. The results provide evidence that financial ratios based on the cash approach provide more useful information for investors. This study has several limitations: it uses only one year of observation data and only manufacturing companies. Future research could use different companies over a longer observation period.

RECOMMENDATION

Theoretically, the results contribute the insight that the cash approach is more useful than the accrual approach. Practically, this research offers investors a basis for conducting cash-based analysis. Finally, it is hoped that policymakers such as the government will prioritize the cash approach in decision-making.
The role of gut microbiome in the complex relationship between respiratory tract infection and asthma

Asthma is one of the common chronic respiratory diseases in children and poses a serious threat to children's quality of life. Respiratory infection is a risk factor for asthma: compared with healthy children, children with early respiratory infections have a higher risk of asthma and an increased chance of developing severe asthma. Many clinical studies have confirmed the correlation between respiratory infections and the pathogenesis of asthma, but the underlying mechanism is still unclear. The gut microbiome is an important part of maintaining the body's immune homeostasis; an imbalance in the gut microbiome can affect lung immune function and thereby affect lung health and cause respiratory disease. A large body of evidence supports bidirectional regulation between the intestinal flora and respiratory tract infection, and both are significantly related to the development of asthma. Changes in intestinal microbial components and their metabolites during respiratory tract infection may affect the occurrence and development of asthma through immune pathways. By summarizing the latest research advances, this review aims to elucidate the intricate connection between respiratory tract infections and the progression of asthma by highlighting the bridging role of the gut microbiome. It also offers novel perspectives and ideas for future investigations into the mechanisms underlying the relationship between respiratory tract infections and asthma.

Introduction

Asthma is a common chronic respiratory disease in children, with substantial morbidity and mortality worldwide (Johnson et al., 2021). To improve the prevention and treatment of asthma, it is crucial to understand the risk factors and underlying mechanisms that contribute to asthma's onset and exacerbation. Patients with asthma may experience acute exacerbation of symptoms and loss of disease control even with appropriate management, and this is usually triggered by respiratory infections (Fujitsuka et al., 2011; Choi et al., 2018). Respiratory viral infections are the most common cause of asthma exacerbation throughout childhood and beyond (Johnston et al., 1996; Zheng et al., 2020; Chen et al., 2023). There is significant evidence suggesting that respiratory infections play a role in the development of asthma, but the underlying mechanisms have yet to be fully elucidated (Busse et al., 2010; Jackson and Gern, 2022). The gut microbiome is the most abundant microbial community in humans and mice, consisting of roughly 4 × 10^14 bacterial cells that have coevolved with the host immune system to establish a symbiosis. The diverse range of gut bacteria is crucial for maintaining the host's immune balance and is associated with the development of various diseases, including asthma, viral infections, and other respiratory diseases (Eckburg et al., 2005). Multiple clinical studies and animal experiments have confirmed the correlation between the gut microbiota and respiratory infections, as well as asthma. Arrieta et al. (2015) found that infants at risk of asthma exhibited transient changes in their gut microbiome composition during the first 100 days of life, including decreased relative abundance of Lachnospira, Veillonella, Faecalibacterium, and Rothia. Supplementing these bacterial groups notably reduced airway inflammation in adult offspring of germ-free mice.
Moreover, there was a significant negative correlation between Lachnospira and cell counts in the bronchoalveolar lavage fluid (BALF) of mice, strongly suggesting that the presence of Lachnospira reduced lung inflammation. These findings indicate that these bacterial taxa may not only be correlated with asthma susceptibility but also play a causal role in preventing asthma development. Gut microbial dysbiosis has also been found in respiratory infections, especially respiratory syncytial virus (RSV) infection and influenza virus infection (Wang et al., 2014; Groves et al., 2018). Resident microbes in the gut can use dietary fiber to produce short-chain fatty acids (SCFAs), thereby regulating the host immune response (Silva et al., 2020). Brestoff and Artis (2013) and Niu et al. (2023) suggest that SCFAs can play a pivotal role in regulating asthma by modulating the differentiation and function of immune cells. In addition, SCFAs can improve the production of interferons (IFNs) to enhance the respiratory immune system's ability to combat viral pathogens (as shown in Figure 1). Given the role of respiratory infections in the pathogenesis of asthma, these results suggest that intestinal microbial dysbiosis may contribute to infection-induced and infection-aggravated asthma, but the mechanism is still unclear. In addition, vitamin D levels are inversely associated with asthma severity, including hospitalizations for asthma with severe infections (Brehm et al., 2009). At the same time, there is a clear association between vitamin D (VD) deficiency and viral respiratory infections, and an inverse association between vitamin D supplementation and rhinovirus infections (Eroglu et al., 2019; Jartti et al., 2021). Notably, Jones et al. (2013) showed that the gut microbiota can alter gut vitamin D metabolism (VDM) and that probiotic supplements can affect circulating VD levels. Therefore, gut microbial dysbiosis may lead to changes in vitamin D levels, affect the occurrence of respiratory viral infections, and in turn affect the severity of asthma. In this article, we review the gut-microbiota-related potential mechanisms by which respiratory infections affect the development and exacerbation of asthma, including the relationship between the gut microbiota and respiratory viral infections and how the gut microbiota regulates the host immune response in the pathogenesis of asthma.

Relationship between gut microbiome and susceptibility to asthma, and characteristics of gut microbiota in asthma

Generally, the gut microbiota of healthy adults consists mainly of Firmicutes (including Roseburia, Enterococcus, and Faecalibacterium) and Bacteroidetes (such as Bacteroides and Prevotella; Costea et al., 2018; Rinninella et al., 2019; Gomaa, 2020; Illiano et al., 2020). This diverse microbiota is rich in microorganisms that can ferment substances that are hard to decompose, such as dietary fiber, to produce SCFAs, including butyrate, propionate, and acetate. These SCFAs can bind to host-derived cytokines and chemokines, translocate through the blood and lymphatic systems, and influence lung immunity (Parada Venegas et al., 2019). Dysregulation of the gut microbiome and reduced metabolic capacity can impair local and pulmonary immunity, thereby increasing susceptibility to lung diseases such as asthma and respiratory viral infections.
Epidemiological studies support the connection between gut microbiota dysbiosis in early life and an increased risk of childhood asthma, while early exposure to dogs, farm life, and microorganisms is negatively associated with the risk of wheezing disease (Lynch et al., 2014; Ludka-Gaulke et al., 2018). Indeed, compared to non-asthmatic children, school-age children with asthma exhibited lower gut microbiome diversity before one month of age (Abrahamsson et al., 2014). Another study found significant differences in the relative abundance of gut bacteria between asthmatic and healthy children: Dialister, Faecalibacterium, and Roseburia of the Firmicutes were significantly reduced in children with asthma (Chiu et al., 2019), which may increase susceptibility to viral infections and contribute to the development of asthma. The differences in gut microbiota between asthmatic and non-asthmatic individuals include not only the diversity and relative abundance of intestinal microorganisms but also intestinal metabolites, such as amino acids, butyrate, and other SCFAs. These metabolites are produced by certain intestinal microorganisms. Bacteroidetes and Firmicutes are the dominant SCFA-producing bacterial phyla, and both can produce acetate. Butyrate is mainly produced by Ruminococcus, Clostridium, and Coprococcus in the Firmicutes, while propionate is produced by Bacteroidetes such as Prevotella and Bacteroides. Chiu et al. (2019) discovered that fecal butyrate was decreased in children with asthma and negatively correlated with serum IgE levels, suggesting that reduced butyrate production, and the resulting impairment of the intestinal epithelial barrier, may promote the uptake of allergens in allergic asthma. The same study found that children with asthma exhibited reduced populations of butyrate-producing bacteria (including Coprococcus and Roseburia) and increased Clostridium, which was negatively associated with butyrate, indicating that the reduction in butyrate is associated with specific gut microbiota changes. Similar alterations in gut microbiota have also been observed in adults with asthma. Fedosenko et al. (2014), Masik et al. (2019), Zolnikova et al. (2020), and Ozimek et al. (2022) have observed changes in gut microbiota in adult patients with bronchial asthma, including an increase in the proportion of Proteobacteria and a decrease in bacteria that can produce butyrate and acetate.

FIGURE 1. Respiratory tract infections can alter the microecology of the intestines, leading to changes in the abundance of the gut microbiome and a reduction in bacteria that produce short-chain fatty acids (SCFAs), so that SCFA production decreases. The reduction in SCFA levels can affect the function and fate of various immune cells, including dendritic cells (DCs), regulatory T cells (Tregs), eosinophils, and type 2 innate lymphoid cells (ILC2s). Additionally, the SCFA receptor GPR43 can interact with the NOD-like receptor protein 3 (NLRP3) to promote the aggregation and signal transduction of the mitochondrial antiviral signaling protein MAVS. When acetate binds to GPR43, it enhances MAVS aggregation, activates downstream TBK1/IRF3, and promotes the production of type I interferon (IFN-I); a reduction in IFN-I can increase the number of Th2 cells and eosinophils, which can lead to airway hyperresponsiveness in asthma.
The changes in gut microbiota in asthma mainly comprise decreased gut microbial diversity, decreased relative abundance of beneficial microorganisms (such as Faecalibacterium), and reduction or even depletion of SCFA-producing bacteria. The gut microbiota has been shown to affect pulmonary diseases such as asthma via the lung-gut axis, raising the possibility of probiotic therapy. A recent study found that supplementation with Lactobacillus rhamnosus reduced the incidence of childhood asthma, likely because the probiotics altered the composition of the microbiome and the levels of SCFAs (Cereta et al., 2021). In another study, researchers discovered that two Lactobacillus strains decreased airway inflammation in mice with HDM-induced allergic asthma, inhibited Th2 and Th17 immune responses, and promoted Treg responses, accompanied by changes in the heterogeneity, composition, and metabolism of the intestinal microbiome. Dietary fiber supplementation has also been demonstrated to have anti-inflammatory effects in asthma, attributed to the increased production of SCFAs from the fermentation of dietary fiber by gut bacteria (Trompette et al., 2014). Cait et al. (2017) found that vancomycin reduced the SCFA-producing intestinal flora, which increased the susceptibility of mice to OVA-induced asthma and papain-induced pulmonary inflammation; compared with controls, supplementation with SCFAs reduced the vancomycin-induced enhancement of lung inflammation in asthmatic mice. SCFAs can exert anti-inflammatory effects by activating free fatty acid receptors, such as G-protein-coupled receptors 41 and 43 (GPR41 and GPR43), in patients with asthma and in animal models of asthma (Halnes et al., 2017). Trompette et al. (2014) found that SCFAs such as propionate could impair the ability of DCs to activate Th2 effector cells in the lung, making allergic airway inflammation unsustainable. SCFAs can also regulate the function of various immune cells, such as type 2 innate lymphoid cells (ILC2s; Thio et al., 2018) and regulatory T cells (Tregs; Arpaia et al., 2013), and inhibit the increase of eosinophils (Theiler et al., 2019), thereby reducing airway hyperresponsiveness in asthma. Detailed mechanisms are described below, in the section on the role of gut microbial changes in the development and exacerbation of asthma. Additionally, SCFAs can prevent and restore impaired airway epithelial barrier function. Epithelial barrier dysfunction is a characteristic feature and pathological mechanism of airway inflammation in asthma patients (Georas and Rezaee, 2014). Compared with non-asthmatic individuals, the bronchial epithelial barrier function of asthmatic patients is impaired (Inoue et al., 2020), making it easier for pathogens or allergens to enter the airway tissue, activate an immune response, and lead to asthma exacerbation (da Silva, 2021). Richards et al. (2020) demonstrated that propionate and butyrate can enhance the barrier function of human bronchial epithelial cells and restore the barrier function impaired by stimulation with IL-4, IL-13, and dust mite extract, owing to SCFA-increased expression of ZO-1, one of the tight junction proteins. In summary, studies have demonstrated that SCFAs exert a protective effect against asthma, indicating that a decrease in SCFAs may exacerbate asthma symptoms (Thio et al., 2018).

Changes of gut microbiome and its derivatives after pulmonary infection and the mechanism of these changes

Groves et al. (2018), Lv et al.
(2021), and Zhu et al. (2021) have shown that pulmonary viral infections can modify the composition of the intestinal flora, although the underlying mechanism remains unclear. The changes in the gut microbiome of mice with asthma after respiratory syncytial virus (RSV) infection were studied for the first time by Lu et al. (2020), who showed that Bacteroidetes increased and Firmicutes decreased in the intestinal microbiota of mice after RSV infection. The loss of balance between these two phyla is associated with many diseases and disorders (Turnbaugh et al., 2006). A reduction in the Lactobacillus family was also found in that study, consistent with changes in the Lactobacillus family following influenza virus infection (Wang et al., 2014). Alterations in the gut microbiota following respiratory infections have also been observed in humans. A study of recurrent respiratory tract infections (RRTI) (Li et al., 2019) found that Verrucomicrobia and Tenericutes were significantly reduced in RRTI patients compared to healthy volunteers. A systematic review of 11 published studies investigating changes in the gut microbiome of patients with confirmed or suspected respiratory tract infections (Woodall et al., 2022) revealed that Firmicutes, Lachnospiraceae, Ruminococcaceae, and Ruminococcus were reduced, while Enterococcus was abundant, compared to healthy controls. Enterococcus has been identified as an opportunistic pathogen causing community-acquired and nosocomial infections (Ager and Gould, 2012; Yu et al., 2015). These altered intestinal bacteria produce SCFAs through the fermentation of dietary fiber, so the above studies imply that respiratory infections may lead to a decrease in SCFAs. Evidence suggests that RTI-induced changes in the gut microbiome can deplete beneficial SCFA-producing gut bacteria such as the Lachnospiraceae, which belong to the Firmicutes and can ferment pectin and fructose to produce acetate, propionate, and ethanol (Boutard et al., 2014). Reduced levels of SCFAs affect the immune system and aggravate asthma (Trompette et al., 2014; Cait et al., 2017). SCFAs can promote the production of interferon, thereby inhibiting respiratory virus replication and promoting virus clearance (Ji et al., 2021). Given the multiple effects of SCFAs discussed earlier, a decrease in beneficial SCFAs caused by respiratory infections may weaken their protective effect against asthma. In general, respiratory virus infections lead to changes in the composition of the intestinal flora, including an increase in Bacteroidetes, a decrease in SCFA-producing bacteria such as Firmicutes and Lachnospiraceae, and an increase in opportunistic pathogens such as Enterococcus (Boutard et al., 2014; Groves et al., 2018; Woodall et al., 2022). These changes resemble those observed in the gut microbiota of individuals with asthma, indicating that an imbalance in the gut microbiota may contribute to the development and exacerbation of asthma following respiratory infections. An animal study (Lu et al., 2020) showed that the gut microbiota of asthmatic mice infected with RSV was altered, and this correlated with elevated Th2 cytokine levels and increased asthma severity. Furthermore, the gut microbiome can influence the development of asthma by affecting the early maturation of the immune system in response to viral infections, which may be linked to a reduction in SCFAs (Lynch et al., 2017).
To summarize, the changes in gut microbiota following respiratory virus infections have implications for the pathogenesis and exacerbation of asthma and warrant further investigation. Although many studies have confirmed changes in the microbiome and SCFA levels following lung infection, the functional implications and underlying mechanisms of these changes are not yet clear. Both type I and type II interferons (IFNs) have been implicated in the alteration of the intestinal microbiota caused by influenza virus infection. Deriu et al. (2016) knocked out the mouse IFN-I gene and found that pulmonary influenza virus infection can significantly alter the distribution of the intestinal microbiota through an IFN-I-dependent mechanism, leading to depletion of anaerobic bacteria and enrichment of Proteobacteria in the intestine. Notably, Proteobacteria expansion is almost always accompanied by a decrease in the abundance of butyrate-producing bacteria, which is characteristic of microbial dysregulation. Wang et al. (2014) found that influenza infection mediated changes in intestinal microbiome composition through IFN-γ produced by pulmonary CCR9+ CD4+ T cells recruited to the small intestine. Another factor that may cause these changes is mucus. Mucus not only protects against exposure to microbes but also acts as an energy source for certain bacteria; if the mucus composition changes, microorganisms that can utilize it gain an ecological benefit (Derrien et al., 2010). Normally, the mucin Muc5ac is expressed in the airway but not in the colon (Koeppen et al., 2013). Groves et al. (2018) observed elevated levels of Muc5ac in the colon of RSV-infected mice and suggested that Muc5ac could be linked to dysbiosis of the intestinal flora. However, whether the increase in mucin is due to viral infection stimulating Muc5ac expression in colonic goblet cells remains to be investigated. Similar conditions have been observed elsewhere, where changes in the composition of the vaginal flora were associated with increased levels of Muc5B and Muc5ac in the cervix (Borgdorff et al., 2016). However, the effect of mucus on gut flora has not been studied previously. Loss of appetite caused by infection is another explanation for the observed changes in microbiome composition after a lung infection. The similarity in gut microbial changes after RSV and influenza virus infections suggests common underlying mechanisms in the immune response to these viral infections. Weight loss is a common symptom in mice after both infections and is associated with reduced food intake; loss of appetite has also been reported in humans infected with influenza virus and respiratory syncytial virus (Monto et al., 2000; Widmer et al., 2012). Diet is the largest factor shaping microbiota composition, and studies have shown that reduced caloric intake in mice and humans is associated with significantly increased abundance ratios of Bacteroidetes to Firmicutes, consistent with the changes in intestinal microbiota composition seen in mice and humans with respiratory infections. Another study by Groves et al. (2018) found that weight change after viral infection was related not to viral load but to CD8+ T cells: depletion of CD8+ T cells during respiratory syncytial virus infection reduced loss of appetite and reversed the changes in the gut microbiome.
Although the mechanism by which CD8+ T cells affect appetite is still unclear, it is likely mediated by cytokines or chemokines produced by CD8+ T cells, such as IL-6 and tumor necrosis factor-β (TNF-β).

The role of gut microbial changes caused by respiratory infection in the development and exacerbation of asthma

The reduction of SCFAs can contribute to asthma exacerbations

The preceding sections suggest that respiratory infections can alter the gut microbiome, reducing the protective gut bacteria that produce SCFAs. This reduction in SCFA-producing bacteria may diminish the protective effect of SCFAs in asthma, potentially aggravating symptoms. The role of SCFAs in the pathogenesis of asthma has been extensively studied in the context of the lung-gut axis, particularly in children. SCFAs have been shown to regulate the systemic immune response by activating GPCRs or inhibiting histone deacetylases (HDACs), leading to downstream signal transduction (as shown in Figure 2). Halnes et al. (2017) demonstrated that SCFAs can upregulate the gene expression of GPCRs, including GPR41 and GPR43, which can alleviate airway inflammation in asthma. Furthermore, SCFAs can inhibit HDACs, a class of proteases present in eukaryotic cells that play an important role in maintaining glucocorticoid sensitivity and in regulating airway inflammation, airway remodeling, and airway hyperresponsiveness (AHR). Yip et al. (2021) showed that inhibition of HDAC activity can effectively reduce respiratory disease in mice. Among the SCFAs, butyrate is the most potent inhibitor of HDAC activity and can directly inhibit HDACs in a dose-dependent manner (Kaiko et al., 2016). SCFAs can also regulate the activity of ILC2s, Tregs, and DCs and inhibit eosinophilia through the signal transduction pathways mentioned above. Figure 3 illustrates the various effects of SCFAs on immune cells in allergic asthma. These cells are closely associated with Th2-mediated immune responses, which are considered a core molecular mechanism of asthma. ILC2s can produce Th2-type cytokines, such as IL-5 and IL-13, leading to eosinophilia and AHR. One study (Thio et al., 2018) demonstrated that butyrate plays a crucial role in regulating the proliferation and function of ILC2s: administration of butyrate inhibited the production of IL-13 and IL-5 and significantly alleviated ILC2-driven AHR and inflammation. Moreover, Lewis et al. (2019) reported that butyrate can down-regulate the expression of GATA3, a key transcription factor involved in ILC2 development and function, effectively suppressing ILC2-driven AHR. SCFAs can induce the differentiation of Tregs by inhibiting HDACs and stimulating GPCRs, including GPR43, GPR41, and GPR109a (Arpaia et al., 2013), a process that may be related to the suppression of allergic airway disease. Hu et al. (2021) showed that supplementation with certain gut microbiota or SCFAs can increase the proportion of Tregs in house dust mite-induced allergic asthma. The CCL19/21-CCR7 chemokine axis plays a crucial role in mediating DC migration from inflamed tissues to draining lymph nodes. DCs isolated from mice treated with butyrate respond less efficiently to CCL19 than those from mice given vancomycin alone, indicating that butyrate attenuates CCL19-induced DC migration (Cait et al., 2017).
Additionally, vancomycin treatment significantly increases the number of DCs in the mediastinal lymph nodes of mice, while exogenous SCFAs reverse this increase and return DC numbers to control levels, suggesting that SCFA exposure can alter the migration behavior of DCs. Furthermore, Kim and Lee (2017) demonstrated that HDAC inhibitors (HDACis) such as trichostatin A (TSA), sodium butyrate, and scriptaid strongly inhibit the migratory activity of DCs; this inhibitory activity results from decreased CCR1 expression on the cell surface and inhibition of the phosphorylation of p38 mitogen-activated protein kinase (MAPK), extracellular signal-regulated kinases 1 and 2 (ERK1/2), and c-Jun N-terminal kinase (JNK). Numerous studies have shown that SCFAs can effectively inhibit HDACs, with butyrate a known class I and II HDAC inhibitor (Hamminger et al., 2020) and propionate having a similar inhibitory effect (Song et al., 2022). Butyrate has been shown to strongly inhibit HDACs in CD8+ T cells, and TSA has a similar effect on these cells (Luu et al., 2018). Theiler et al. (2019) used in vitro and in vivo experiments to demonstrate that butyrate can reduce the adhesion, migration, and survival of human eosinophils; these effects are independent of GPR41 and GPR43 but are accompanied by histone acetylation and can be mimicked by HDAC inhibitors. This suggests that butyrate can inhibit eosinophil accumulation, reduce the level of type 2 cytokines, and improve AHR in asthmatic mice, possibly in a causal relationship with HDAC inhibition. Overall, SCFAs can regulate the systemic immune response and reduce AHR in asthma by inhibiting the Th2 response through direct effects on ILC2s, T cells, DCs, and eosinophils.

FIGURE 2. The gut microbiome can utilize dietary fiber to produce short-chain fatty acids (SCFAs) such as acetate. These SCFAs can activate G-protein-coupled receptors (GPCRs) such as GPR41, GPR43, and GPR109a and initiate signal transduction. SCFAs also exert significant inhibitory effects on lysine/histone deacetylase (K/HDAC) activity. Through their HDAC-inhibitory and histone acetyltransferase (HAT)-promoting effects, SCFAs increase histone acetylation, which in turn promotes histone translational modifications and regulates systemic immune responses.

Interferon deficiency regulated by gut microbiome may contribute to the exacerbation of asthma

IFN is one of the inflammatory mediators induced after viral infection. It plays a crucial role in initiating a robust antiviral response, which includes impairing viral replication and cell-to-cell transmission and enhancing innate and adaptive immune responses against viruses and some bacteria. IFNs can be classified into three types: type I IFN (IFN-α and IFN-β), type II IFN (IFN-γ), and type III IFN (the IFN-λ family, including IFN-λ1, IFN-λ2, and IFN-λ3). Type I and type III IFNs are the primary IFN isoforms induced by pulmonary viral infections. In contrast, type II IFN (IFN-γ) is typically induced by cytokines and is primarily expressed by immune cells such as T cells, NK cells, and monocytes, rather than in direct response to viral induction (Bar Yoseph et al., 2015). Previous studies have found that patients with asthma may have impaired production of type I and III IFNs, which could increase their susceptibility to viral infection (Confino-Cohen et al., 2014). Plasmacytoid dendritic cells (pDCs) are the primary cells responsible for producing IFN-α against viruses. Lin et al.
(2020) found that after rhinovirus (RV) stimulation, IFN-α production by pDCs was significantly increased in normal subjects but not in asthmatic patients, providing further evidence of a defective innate response to RV infection in asthma. Moreover, evidence shows that both asthmatic patients, regardless of atopic status, and atopic patients without asthma have an insufficient interferon response to RV infection during childhood (Baraldo et al., 2012). However, Sykes et al. (2013) specifically selected subjects with well-controlled asthma in another study and did not observe any defects in the induction of IFN by RV. Although these results show some variability, most studies suggest that patients with asthma have an insufficient interferon response. Interestingly, type I IFN production has been shown to be promoted by SCFAs, which are typically reduced in respiratory infections and asthma. This suggests that the impaired expression of type I IFN in asthma may be associated with a reduction in SCFAs.

FIGURE 3. SCFAs have a broad spectrum of effects on immune cells involved in allergic asthma, a complex inflammatory disease. Exposure to allergens can result in eosinophilia, airway hyperresponsiveness, and goblet cell hyperplasia, all of which are driven by a combination of dendritic cells (DCs), Th2 cells, ILC2s, and eosinophils. SCFAs can inhibit DC activation and migration, which would otherwise stimulate the differentiation of immature/naive CD4+ T cells into the Th2 lineage; SCFAs also reduce the responsiveness of DCs to CCL19, inhibiting DC migration. SCFAs can promote the conversion of naive CD4+ T cells from the Th2 lineage toward regulatory T cells (Tregs) and inhibit ILC2 secretion of the cytokines IL-5 and IL-13; IL-5 can activate eosinophils, and IL-13 can stimulate airway epithelial cells to produce mucus and aggravate airway hyperresponsiveness. Furthermore, SCFAs can down-regulate the expression of GATA3, a key transcription factor involved in ILC2 development and function, and can decrease the adhesion, migration, and survival of human eosinophils. Overall, SCFAs play a significant role in regulating the differentiation and function of various immune cells and thus have the potential to improve allergic asthma.

The mechanisms by which SCFAs regulate the generation of interferons

Growing evidence suggests that intestinal dysbiosis can impact the immune response of the lungs (Kulas et al., 2019). Additionally, changes in gut microbes reported in respiratory viral and bacterial infections have been linked to immune inflammation in the lungs. Studies have shown that lung infections can cause inflammatory cell infiltration and increased levels of intestinal IFN-γ and interleukin 17 (IL-17), and a decrease in the diversity of the intestinal bacterial microbiota has been observed early in infection (Kulas et al., 2019). These findings suggest that infections can disrupt the balance of gut microbes, leading to impaired interferon production and asthma exacerbation. Increased production of IFNs can enhance the body's ability to prevent respiratory virus infections and clear viruses, thereby reducing virus-induced asthma exacerbations. Studies have shown that improving the intestinal microecology can reduce viral load, increase the expression of IFN-α, IFN-γ, and IL-1β, and reduce TNF-α. The specific mechanism underlying these effects is still unclear, but it may be related to the role of SCFAs in modulating the immune response.
A study (Ji et al., 2021) demonstrated that administering probiotics can significantly increase the abundance of SCFA-producing bacteria in mice infected with RSV, resulting in higher serum SCFA levels. Furthermore, acetate was found to increase IFN-β production in alveolar macrophages (AMs), indicating that probiotics can prevent RSV infection in neonatal mice through the microbiota-AM axis. This highlights the potential of probiotics to modulate the microbiota-SCFA-AM-IFN axis to enhance the body's ability to prevent and fight viral infections. In a recent study (Niu et al., 2023), acetate produced by Bifidobacterium pseudolongum NjM1 was discovered to promote the production of IFN-I by macrophages, which plays a protective role against viral infection. Mechanistically, the acetate receptor GPR43 interacts with the nucleotide-binding oligomerization domain-like receptor protein 3 (NLRP3) to promote the aggregation and signal transduction of the mitochondrial antiviral signaling protein (MAVS). The binding of acetate to GPR43 enhances MAVS aggregation, and downstream activation of TANK-binding kinase 1 (TBK1) and interferon regulatory factor 3 (IRF3) leads to the production of IFN-I. Furthermore, acetate can reduce the severity of RSV infection and the viral load by regulating the expression of retinoic acid-inducible gene I (RIG-I). These findings suggest that acetate produced by certain probiotics can modulate the immune response to enhance the body's ability to prevent and fight viral infections.

Effects of interferon on respiratory viral infections and asthma

Sumino et al. (2012) demonstrated that a reduced antiviral interferon response at birth among infants at high risk of asthma and allergic diseases is significantly correlated with increased acute respiratory infections in the following year. Furthermore, research by Rich et al. (2020) indicated that impaired production of type I or type III IFN might be linked to infection-induced asthma exacerbations. Bergauer et al. (2017) confirmed previous reports of impaired interferon responses in stable asthma but found that IFN secretion was overactive during asthma exacerbations associated with rhinovirus infection in children; the authors suggested that although asthmatic children may have impaired immune responses under normal circumstances, their ability to upregulate IFNs during the acute phase is preserved. Gonzales-van Horn and Farrar (2015) demonstrated that primary human bronchial epithelial cells from asthmatic patients produced lower levels of IFN-λ in response to experimental RV challenge than cells from healthy controls, and this impaired interferon secretion was associated with an increased incidence of virus-induced asthma exacerbations. It is therefore crucial to determine whether the impaired interferon secretion observed during virus-induced acute asthma attacks is a general phenomenon or is limited to steady-state conditions. IFN-α has been shown to reduce the number of infiltrating eosinophils in the airway tissues of asthmatic patients in a dose-dependent manner (Kikkawa et al., 2012). Type III IFN was found to have a comparable effect: higher levels of type III IFN were associated with fewer eosinophils in sputum, lower levels of IL-8 in BALF, and reduced viral load in patients with respiratory infections (Contoli et al., 2006).
Furthermore, there is evidence that type I and type III IFNs can inhibit GATA3 expression, restrict the development of Th2 cells, and reduce the secretion of type 2 cytokines (Krammer et al., 2021). The presence and magnitude of type 2 immune responses, characterized by eosinophilia and elevated levels of proinflammatory cytokines such as IL-4, IL-5, and IL-13, are believed to be key mechanisms in the development of asthma. Type I IFN has also been shown to negatively regulate the development and activation of human and mouse Th17 cells. Additionally, interferon indirectly impacts the immune response through the IFN-λ-TSLP axis. Thymic stromal lymphopoietin (TSLP) is an adaptive immunomodulator produced by epithelial cells that can be induced by type III IFN in mice infected with live attenuated influenza virus. TSLP is an important alarmin for allergic reactions associated with Th2 inflammation and has been shown to act as a mediator of eosinophilia in viral infections in vitro (Lee et al., 2012); however, its role in eosinophilia in vivo has not been well documented and requires further investigation. Studies have demonstrated that the IFN-γ response is associated with total BALF infiltration and that decreasing IFN-γ production can alleviate airway inflammation and AHR (Zang et al., 2011; Kumar et al., 2012). In addition, IFN-γ can stimulate macrophages to produce IL-27, leading to steroid-resistant airway inflammation and AHR (Kumar et al., 2012), suggesting that targeting the effects of IFN-γ on pulmonary macrophages may be particularly relevant for treating acute exacerbations of steroid-resistant asthma. Notably, decreased intestinal microbial diversity can lead to increased IFN-γ production, which not only affects AHR in asthma but also increases mortality from respiratory viral infections (Grayson et al., 2018).

The impact of vitamin D deficiency, which can be modulated by gut microbes, on respiratory viral infections and asthma

Bidirectional regulation between gut microbiome and vitamin D metabolism

Vitamin D (VD) and the gut microbiome appear to affect the immune system in respiratory diseases in a variety of similar ways, and there may be some interaction and/or synergy between them (Murdaca et al., 2021). Shang and Sun (2017) and Bora et al. (2018) have shown that the gut microbiome can alter gut vitamin D metabolism (VDM) and that probiotic supplements can affect circulating VD levels. Meanwhile, people with higher 1,25(OH)2D levels were more likely to have a favorable gut microbiota, especially more butyrate-producing bacteria. There is bidirectional signaling between bacteria and the colonic epithelium, and the gut microbiome can also directly affect VDM (Waterhouse et al., 2019). Notably, VD deficiency affects both the occurrence of respiratory infections and the exacerbation of asthma (Confino-Cohen et al., 2014; Ganmaa et al., 2022). Figure 4 summarizes the effect of vitamin D deficiency on the increased risk and exacerbation of asthma. This suggests that the gut microbiota may play a more complex role in infection and asthma exacerbation by altering VD levels.

Vitamin D can reduce the risk of asthma occurrence and exacerbation through anti-viral effects

Epidemiological studies have demonstrated a significant link between vitamin D deficiency and viral respiratory infections.
FIGURE 4. The gut microbiome and vitamin D regulate each other bidirectionally, and probiotic supplementation can improve vitamin D deficiency in patients with asthma. Vitamin D deficiency has been found to increase the risk of asthma exacerbations through various pathways: a decrease in vitamin D can increase airway hyperresponsiveness and promote airway remodeling, and it can reduce the body's antiviral capacity, making severe respiratory virus infections more likely and thereby further increasing the risk of asthma and asthma exacerbation. Vitamin D can play a protective role in asthma by reducing oxidative stress in serum and lung tissue, improving antioxidant capacity, reducing the formation of nitric oxide in serum, and decreasing the expression of nuclear factor kappa B in the lung. Vitamin D deficiency can also shift the balance between Th1-type and Th2-type cytokines toward Th2 dominance. In addition, vitamin D deficiency can increase the risk of asthma through its ability to regulate B cell activity and affect IgE production.

A cohort study involving children from various countries found that vitamin D supplementation was inversely related to RV infection (Jartti et al., 2021). Furthermore, children with lower respiratory tract infections had notably lower average vitamin D levels than controls, and the incidence and severity of lower respiratory tract infections were correlated with vitamin D levels (Şişmanlar et al., 2016). Anitua et al. (2022) found that, compared with placebo, vitamin D supplementation had a small but positive effect in lowering the risk of one or more acute respiratory infections. Toll-like receptor 3 (TLR3) is a viral double-stranded RNA recognition receptor that, in viral infections, can exacerbate disease by inducing tissue damage and facilitating viral dissemination. Excessive TLR3-mediated inflammation may play a key role in promoting asthma exacerbation and fibrosis, with bronchial smooth muscle cells (BSMCs) being among the major contributors to airway remodeling in asthma (Yang et al., 2017). Plesa et al. (2021) showed that vitamin D3 can attenuate TLR3 agonist-induced inflammation and fibrotic responses in BSMCs, exerting a protective effect against the worsening of conditions associated with viral infections. These results support vitamin D3 supplementation in virus-infected asthma patients. The same study also found that BSMCs from asthmatic patients were more sensitive to TLR3 agonist (poly I:C) stimulation than those from control subjects, which may indicate a greater propensity for severe reactions in virus-infected asthma patients. In primary bronchial epithelial cells (PBECs) treated with VD, reduced RV replication and infectivity were observed, suggesting an antiviral effect of VD (Telcian et al., 2017). The effect of VD on respiratory viral infection may be influenced by various factors, such as the frequency, dosage, and duration of vitamin D supplementation (Jartti et al., 2021). Based on the evidence presented above, vitamin D levels may play a potential role in the association between viral infections and asthma.

Other effects of vitamin D deficiency on asthma exacerbations

VD deficiency is strongly associated with the exacerbation of asthma in both children and adults, as it can influence both innate and adaptive immune responses (Confino-Cohen et al., 2014; Ozkars et al., 2019).
VD deficiency may promote the development of AHR and airflow limitation in asthma patients, leading to worsened symptoms (Alyasin et al., 2011; Ozkars et al., 2019). In mouse models, VD deficiency is associated with increased AHR, elevated cytokines (IL-10 and TGF-β) in BALF, and reduced Treg cells (Agrawal et al., 2013). Supplementation with vitamin D can reduce allergen-induced inflammation and AHR in sensitized mice (Agrawal et al., 2013; Fischer et al., 2016). In vitro, Camargo (2018) showed that VD deficiency can shift the balance between Th1 and Th2 cytokines toward a Th2-dominant response. de Groot et al. (2015) also demonstrated that VD supplementation can reduce eosinophilic airway inflammation in non-allergic asthma patients, possibly by restoring the production of IL-10. It is worth noting that there is a U-shaped relationship between serum VD levels and asthma, in which both VD deficiency and high VD levels increase the risk of asthma; this may be related to VD's regulation of B cell activity and IgE production (Douros et al., 2017). In asthma, there is an imbalance between the lung's protective antioxidant system and the generation of oxidative species, leading to an increase in oxidative products and activation of intracellular redox signaling mechanisms (Chamitava et al., 2020). One such mechanism involves activation of the nuclear factor kappa B (NF-κB) pathway, which further increases the production of inflammatory molecules (Kirkham and Rahman, 2006). Studies in mice have shown that vitamin D can protect against asthma by reducing oxidative stress in serum and lung tissue; it can also increase antioxidant capacity, decrease the formation of nitric oxide in serum, and reduce the expression of NF-κB p65 in the lungs (Adam-Bonci et al., 2021). It has been reported (Telcian et al., 2017) that respiratory virus infections can down-regulate VDR mRNA expression in bronchial epithelial cells. Considering the role of VD deficiency in exacerbating asthma, the decrease in VD receptor expression induced by viral infections may reduce the protective effect of VD against asthma.

Conclusion

The regulation of host immunity is highly dependent on the gut microbiome. A wealth of data supports the notion that gut microbial dysbiosis may contribute to the development of respiratory infections and asthma. This is particularly relevant for children, in whom microbial changes in early life can have long-lasting effects on immune system maturation and overall health, ultimately increasing the risk of asthma onset and exacerbation. Respiratory viral infections have been identified as a risk factor for asthma occurrence and exacerbation, and the complex underlying mechanisms warrant further investigation. We postulate that dysbiosis of the gut microbiota may serve as a bridging factor in the development of asthma triggered by respiratory infections. Restoring the homeostasis of the gut microbiome could potentially become a viable strategy for preventing childhood asthma.

Author contributions

Collection and analysis were performed by YYa, HZ, and SS. The first draft of the manuscript was written by XZ and MH. The conception of the study was first put forward by ZX and YYo. All authors contributed to the study's conception and design, commented on previous versions of the manuscript, and read and approved the final manuscript.
Conflict of interest The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Publisher's note All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
2023-07-29T15:10:51.692Z
2023-07-27T00:00:00.000
{ "year": 2023, "sha1": "144f80389bb17c43a99686f53e74dc9cad4db38c", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2023.1219942/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7e17ba6315d1a42cb186125b425b068ca8625e28", "s2fieldsofstudy": [ "Medicine", "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
255975612
pes2o/s2orc
v3-fos-license
Description of a new species of Aplectana (Nematoda: Ascaridomorpha: Cosmocercidae) using an integrative approach and preliminary phylogenetic study of Cosmocercidae and related taxa

Nematodes of the family Cosmocercidae (Ascaridomorpha: Cosmocercoidea) are mainly parasitic in the digestive tract of various amphibians and reptiles worldwide. However, our knowledge of the molecular phylogeny of the Cosmocercidae is still far from comprehensive. The phylogenetic relationships between Cosmocercidae and the other two families, Atractidae and Kathlaniidae, in the superfamily Cosmocercoidea are still under debate. Moreover, the systematic position of some genera within Cosmocercidae remains unclear. Nematodes collected from Polypedates megacephalus (Hallowell) (Anura: Rhacophoridae) were identified using morphological (light and scanning electron microscopy) and molecular methods [sequencing the small ribosomal DNA (18S), internal transcribed spacer 1 (ITS-1), large ribosomal DNA (28S) and mitochondrial cytochrome c oxidase subunit 1 (cox1) target regions]. Phylogenetic analyses of cosmocercoid nematodes using 18S + 28S sequence data were performed to clarify the phylogenetic relationships of the Cosmocercidae, Atractidae and Kathlaniidae in the Cosmocercoidea and the systematic position of the genus Aplectana in Cosmocercidae. Morphological and genetic evidence supported the hypothesis that the nematode specimens collected from P. megacephalus represent a new species of Aplectana (Cosmocercoidea: Cosmocercidae). Our phylogenetic results revealed that the Cosmocercidae is a monophyletic group, but not the basal group in Cosmocercoidea as in the traditional classification. The Kathlaniidae is a paraphyletic group because the subfamily Cruziinae within Kathlaniidae (including only the genus Cruzia) formed a separate lineage. Phylogenetic analyses also showed that the genus Aplectana has a closer relationship to the genus Cosmocerca in Cosmocercidae. Our phylogenetic results suggested that the subfamily Cruziinae should be moved from the hitherto-defined family Kathlaniidae and elevated as a separate family, and that the genus Cosmocerca is closely related to the genus Aplectana in the family Cosmocercidae. The present study provided a basic molecular phylogenetic framework for the superfamily Cosmocercoidea based on 18S + 28S sequence data for the first time to our knowledge. Moreover, a new species, A. xishuangbannaensis n. sp., was described using an integrative approach.

Background

The superfamily Cosmocercoidea is a group of zooparasitic nematodes and currently comprises three families, namely, Atractidae Railliet, 1917, Cosmocercidae Railliet, 1916, and Kathlaniidae Lane, 1914 [1][2][3]. Among them, Cosmocercidae is the largest family, including approximately 200 nominal species, which are mainly parasitic in the digestive tract of various amphibians and reptiles worldwide [4][5][6]. The evolutionary relationships of the Cosmocercidae and the other two families are not yet resolved. Based on morphological and ecological traits, some previous studies [1,6,7] considered that the Cosmocercidae represents the ancestral group in Cosmocercoidea. The present knowledge of the molecular phylogeny of Cosmocercoidea/Cosmocercidae is still very limited. To date, several studies [8][9][10][11] have provided molecular phylogenetic analyses to solve the systematic status of some genera in Cosmocercoidea using different genetic data.
However, due to the paucity and inaccessibility of suitable material of Cosmocercoidea/Cosmocercidae for genetic analysis, all of these molecular phylogenetic studies have included only small numbers of representatives of these taxa. To clarify the phylogenetic relationships of the Cosmocercidae and the other families Atractidae and Kathlaniidae in Cosmocercoidea, and the systematic position of the genus Aplectana in Cosmocercidae, phylogenetic analyses including the most comprehensive taxon sampling of Cosmocercoidea to date were performed using maximum likelihood (ML) inference and Bayesian inference (BI) based on 18S + 28S sequence data. Moreover, a new species of Aplectana was described using an integrative approach.

Parasite collection

A total of 91 Polypedates megacephalus (Hallowell) (Anura: Rhacophoridae) collected in the XiShuangBanNa Tropical Botanical Garden, Yunnan Province, China, were investigated for nematode parasites. Nematode specimens were isolated from the intestine of this host and then fixed and stored in 80% ethanol until study.

Morphological observations

For light microscopical studies, nematodes were cleared in lactophenol. Drawings were made using a Nikon microscope drawing attachment. For scanning electron microscopy (SEM), the anterior and posterior ends of nematodes were re-fixed in 4% formaldehyde solution, post-fixed in 1% OsO4, dehydrated via an ethanol series and acetone, and then critical point dried. Samples were coated with gold and examined using a Hitachi S-4800 scanning electron microscope at an accelerating voltage of 20 kV. Measurements (the range, followed by the mean in parentheses) are given in micrometers (μm) unless otherwise stated. Type specimens were deposited in the College of Life Sciences, Hebei Normal University, Hebei Province, P.R. China.

Molecular procedures

Genomic DNA from each sample was extracted using a Column Genomic DNA Isolation Kit (Shanghai Sangon, China) according to the manufacturer's instructions. The partial 18S region was amplified by polymerase chain reaction (PCR) using the forward primer 18S-F (5′-CGC GAA TRG CTC ATT ACA ACA GC-3′) and the reverse primer 18S-R (5′-GGG CGG TAT CTG ATC GCC-3′) [12]. The partial 28S region of nuclear rDNA was amplified by PCR using the forward primer 28S-F (5′-AGC GGA GGA AAA GAA ACT AA-3′) and the reverse primer 28S-R (5′-ATC CGT GTT TCA AGA CGG G-3′) [13]. The ITS-1 region of nuclear rDNA was amplified by PCR using the forward primer SS1 (5′-GTT TCC GTA GGT GAA CCT GCG-3′) and the reverse primer SS2R (5′-AGT GCT CAA TGT GTC TGC AA-3′) [14]. The partial cox1 region was amplified by PCR using the forward primer COIF (5′-TTT TTT GGT CAT CCT GAG GTT TAT-3′) and the reverse primer COIR (5′-ACA TAA TGA AAA TGA CTA ACA AC-3′) [15]. The cycling conditions were as described in a previous study [9]. PCR products were checked on GoldView-stained 1.5% agarose gels and purified with the Column PCR Product Purification Kit (Shanghai Sangon, China). Sequencing was carried out using a DyeDeoxy Terminator Cycle Sequencing Kit (v.2, Applied Biosystems, Foster City, CA, USA) and an automated sequencer (ABI-PRISM 377). Sequencing of each sample was carried out on both strands. Sequences were aligned using ClustalW2. The DNA sequences obtained herein were deposited in the National Center for Biotechnology Information (NCBI) database (http://www.ncbi.nlm.nih.gov) and compared (using the BLASTn algorithm) with those available in the GenBank database.
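Since the identification step above ultimately rests on a BLASTn comparison of the cleaned sequences against GenBank, a minimal sketch of that step is shown below (Python with Biopython). The FASTA file name is a placeholder, and this illustrates the general workflow rather than the authors' exact pipeline.

```python
# Minimal sketch: identify a cleaned 18S/ITS-1/28S/cox1 sequence by remote
# BLASTn against the NCBI nt database, as in the identification step above.
# Assumptions: "sample_18S.fasta" (hypothetical) holds one cleaned sequence.
from Bio import SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

record = SeqIO.read("sample_18S.fasta", "fasta")      # cleaned consensus sequence
result = NCBIWWW.qblast("blastn", "nt", record.seq)   # remote BLASTn search
blast_record = NCBIXML.read(result)

for alignment in blast_record.alignments[:5]:         # report the top five hits
    hsp = alignment.hsps[0]
    identity = 100.0 * hsp.identities / hsp.align_length
    print(f"{alignment.title[:60]}  {identity:.1f}% identity, E={hsp.expect:.2g}")
```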
Phylogenetic analyses

Phylogenetic trees were constructed based on the 18S + 28S sequence data using maximum likelihood (ML) in IQ-TREE and Bayesian inference (BI) in MrBayes 3.2 [16,17]. Ascaris lumbricoides Linnaeus, 1758 (Ascaridida: Ascaridoidea) was used as the outgroup. The ingroup included 16 cosmocercoid species belonging to 8 genera in 3 different families: Cosmocercidae, Atractidae and Kathlaniidae. Detailed information on the nematode species included in the phylogenetic analyses is provided in Table 1. We used a built-in function in IQ-TREE to select a best-fitting substitution model for the sequences according to the Bayesian information criterion [18]. The TIM3e + G4 model was identified as the optimal nucleotide substitution model for the 18S + 28S sequence data. Reliability of the ML tree was tested using 1000 bootstrap replications, the BI analysis was run for 50 million generations, and bootstrap values exceeding 70% were shown in the phylogenetic tree. (A short sketch of assembling the concatenated 18S + 28S matrix is given after the Discussion below.)

Etymology: The specific epithet refers to the type location, the XiShuangBanNa Tropical Botanical Garden, Yunnan Province, China.

Partial cox1 region

Three cox1 sequences of A. xishuangbannaensis n. sp.

Phylogenetic analyses

Phylogenetic trees inferred from maximum likelihood (ML) and Bayesian inference (BI) showed that representatives of Cosmocercoidea were divided into four major clades (Fig. 3). Clade I included species of the three genera Cosmocerca, Cosmocercoides and Aplectana, representing the family Cosmocercidae. Among the three genera, Cosmocerca displayed a closer relationship to Aplectana than to Cosmocercoides. Clade II included only Cruzia americana (a common nematode parasite in the digestive tract of opossums), which belongs to the subfamily Cruziinae in the family Kathlaniidae according to the current classification [1]. Clade III included species of Falcaustra and Megalobatrachonema, which represent the family Kathlaniidae. The representatives of Orientatractis and Rondonia formed Clade IV, representing the family Atractidae.

Discussion

The genus Aplectana (Cosmocercoidea: Cosmocercidae) is a group of zooparasitic nematodes, with approximately 50 nominal species mainly parasitic in various amphibians, and rarely occurring in reptiles worldwide [4,5,[20][21][22]. The absence of rosette papillae or plectanes in males and the presence of somatic papillae, lateral alae and two prodelphic ovaries, with uteri containing numerous eggs of normal size in females, allocate the present specimens to the genus Aplectana. To date, only four species of Aplectana have been reported in China, namely A. hainanensis Bursey, Goldberg & Grismer, 2018, A. hylae Wang, 1980, A. macintoshii (Stewart, 1914) and A. paucipapillosa Wang, 1980 [22-24]. Lacking a gubernaculum, the new species can be easily distinguished from the four above-mentioned species (all four species possess a gubernaculum) [20,22,23]. Aplectana xishuangbannaensis n. sp. differs from A. dubrajpuri and A. meridionalis in the position of the excretory pore (situated at the anterior end of the oesophageal bulb vs at the midpoint between the nerve ring and oesophageal bulb in the latter two species). With only one pair of precloacal papillae, the new species can be easily differentiated from A. tarija, which has six pairs of precloacal papillae.
Currently, the specific diagnosis of Aplectana spp. remains based on morphology, and the genetic data for these parasites are severely limited. Based on the genetic analysis of A. xishuangbannaensis n. sp., no intraspecific nucleotide differences in the 18S, ITS-1, 28S and cox1 regions among different individuals were noted, but a high level of interspecific genetic variation in these regions among species of the other genera in the Cosmocercidae was clear. Our phylogenetic results are largely congruent with the traditional classifications of the Cosmocercoidea, which have been proposed based on morphological characters and ecological traits, including the structure of the oesophagus, the presence or absence of a precloacal sucker, the morphology of caudal papillae, the morphology of female reproductive organs and the reproductive strategies [1,2,36]. The systematic position of the subfamily Cruziinae has long been under debate. Our molecular phylogenetic results conflicted with the traditional classification [1,5,[40][41][42] and suggested that the subfamily Cruziinae should be moved out of the hitherto-defined family Kathlaniidae and elevated to a separate family. The highly specialized structure of the pharynx (the presence of unique pharyngeal lamellae) and the unique digestive system (the presence of an intestinal caecum) of this group support its full family status [43]. However, a more rigorous molecular phylogenetic study with broader representatives of the Cruziinae using different nuclear and/or mitochondrial genetic markers is required to further ascertain its systematic position. The Cosmocercidae currently includes about 200 nominal species allocated in more than 20 genera, representing the largest family within Cosmocercoidea [1,3,21,44]. However, the phylogenetic relationships among genera within Cosmocercidae are poorly understood because of the lack of genetic data. According to Chabaud (1978) [1] and Gibbons (2010) [44], the morphology of caudal papillae in males is one of the most important characters for generic diagnosis in the Cosmocercidae. Species of the genus Aplectana have no modified papillae (plectanes and/or rosette papillae), but those of Cosmocerca and Cosmocercoides have this character. Wilkie (1930) [45], Skrjabin et al. (1961) [5] and Chabaud (1978) [1] considered the genera with modified papillae to be more closely related to each other than to Aplectana. However, our results indicated that Cosmocerca is closer to Aplectana than to Cosmocercoides, conflicting with the traditional systematics based on morphology.
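All of the phylogenetic comparisons above rest on the concatenated 18S + 28S matrix described in the methods. A minimal sketch of assembling such a two-gene supermatrix before handing it to IQ-TREE or MrBayes is shown below (Python; the per-gene alignment file names are hypothetical).

```python
# Minimal sketch of building a concatenated 18S + 28S supermatrix for tree
# inference, assuming per-gene FASTA alignments with matching taxon names.
# "18S.aln.fasta" and "28S.aln.fasta" are hypothetical file names.
from Bio import SeqIO

def read_alignment(path):
    return {rec.id: str(rec.seq) for rec in SeqIO.parse(path, "fasta")}

gene18s = read_alignment("18S.aln.fasta")
gene28s = read_alignment("28S.aln.fasta")
taxa = sorted(set(gene18s) & set(gene28s))   # keep taxa present in both genes

with open("concat_18S_28S.fasta", "w") as out:
    for taxon in taxa:
        out.write(f">{taxon}\n{gene18s[taxon]}{gene28s[taxon]}\n")
# The concatenated file can then be passed to IQ-TREE (ML) or MrBayes (BI),
# with model selection performed as described in the methods above.
```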
2023-01-19T22:22:50.541Z
2021-03-18T00:00:00.000
{ "year": 2021, "sha1": "3ff307ebbcb0e7fe203ff1bc1e12555f658005e4", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s13071-021-04667-9", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "3ff307ebbcb0e7fe203ff1bc1e12555f658005e4", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
239475608
pes2o/s2orc
v3-fos-license
Corrosion Properties of Cr27Fe24Co18Ni26Nb5 Alloy in 1 N Sulfuric Acid and 1 N Hydrochloric Acid Solutions

The composition of the Cr27Fe24Co18Ni26Nb5 high-entropy alloy was selected from the FCC phase in a CrFeCoNiNb alloy. The alloy was melted in an argon atmosphere arc-furnace, followed by annealing in an air furnace. The dendrites of the alloy were in the FCC phase, and the eutectic interdendrites of the alloy comprised HCP and FCC phases. The microstructures and hardness of this alloy were examined; the results indicated that this alloy was very stable. The microstructure and hardness of the alloy remained almost the same after annealing at 1000 °C for 24 h. The polarization behaviors of the Cr27Fe24Co18Ni26Nb5 alloy in 1 N sulfuric acid and 1 N hydrochloric acid solutions were measured. Both the corrosion potential and the corrosion current density of the Cr27Fe24Co18Ni26Nb5 alloy increased with increasing test temperatures. The activation energies of the Cr27Fe24Co18Ni26Nb5 alloy in these two solutions were also calculated.

Introduction

The high-entropy alloy concept [1,2] is now well known for designing new alloys. The four major characteristics of the high-entropy alloy concept are high entropy, sluggish diffusion, severe lattice distortion, and cocktail effects [3]. Materials researchers use this concept to smartly select appropriate elements to develop alloys for various application fields [4]. The high-entropy alloy concept is also used to improve the corrosion resistance of metals, which is a very important issue. For example, the Cu0.5NiAlCoCrFeSi alloy had lower corrosion current densities than 304 stainless steel in H2SO4 and NaCl solutions [5], but the pitting potential of the Cu0.5NiAlCoCrFeSi alloy was lower than that of 304 stainless steel in a NaCl solution. The CoCrFeNiTi alloy had a better pitting-corrosion potential than conventional corrosion-resistant alloys, such as Ni-based superalloys and duplex stainless steels, in corrosion environments [6]. The FeCoNiNb and FeCoNiNb0.5Mo0.5 alloys had dual-phased dendritic microstructures, but they exhibited good corrosion resistance in comparison to 304 stainless steel in 1 M nitric acid and 1 M NaCl solutions [7]. The CrFeCoNiSn alloy also exhibited good corrosion resistance with respect to 304 stainless steel in 0.6 M NaCl solution [8]. Moreover, the AlxCoCrFeNi (x = 0.15 and 0.4) high-entropy alloys exhibited better thermal stability and corrosion resistance compared to HR3C steel in a high-temperature and high-pressure environment [9]. Passivating elements, such as chromium, nickel, and molybdenum, are usually selected as the elements of coating materials. Laser cladding, sputter deposition, and electro-spark deposition are frequently used processes. An Al0.5CoCrCuFeNi alloy coating applied to the surface of AZ91D by laser cladding successfully improved the corrosion resistance in 3.5 wt.% sodium chloride solution [10]. NbTiAlSiZrNx alloy thin films sputtered on 304 stainless steel exhibited good corrosion resistance in 1 N H2SO4 solution [11]. Coatings of AlCrxNiCu0.5Mo (x = 0, 0.5, 1.0, 1.5, 2.0) alloys significantly improved the corrosion resistance of Q235 steel in 3.5% NaCl solution and a salt spray corrosion environment [12]. Cold working and annealing could also influence the electrochemical properties of 316 stainless steel, leading to an increase and a decrease in breakdown potential, respectively [13].
Shi et al. reviewed the corrosion-resistant properties of high-entropy alloys in different solutions and corrosion environments and pointed out methods for improving corrosion resistance [14]. In our previous study on CrFeCoNiNb alloys [15], the microstructures and corrosion behaviors of a CrFeCoNiNb alloy were investigated. The composition of the Cr27Fe24Co18Ni26Nb5 alloy was selected from the FCC phase of the CrFeCoNiNb alloy. The microstructures, annealing effect, and polarization behaviors of the Cr27Fe24Co18Ni26Nb5 alloy were investigated in the present work.

Materials and Methods

The nominal compositions of the Cr27Fe24Co18Ni26Nb5 alloy in weight percentage were Cr 24.22%, Fe 23.13%, Co 18.30%, Ni 26.34%, and Nb 8.01%. The alloy was melted in an arc-furnace in an argon atmosphere. The total weight of the alloy was about 120 g. Part of the alloy was annealed at 1000 °C for different times. A scanning electron microscope was used to observe the microstructures of the alloy. An X-ray diffractometer was used to examine the crystal structures of the alloy; the scanning rate was 0.04°/s and the scanning range was 20-100°. A Vickers hardness tester was used to measure the hardness of the alloy; the loading force was 19.62 N (2 kg). The polarization behaviors of the alloy were measured using an electrochemical analyzer. A saturated silver chloride electrode (Ag/AgCl, SSE) was used as the reference electrode, and its potential was 0.197 V higher than that of the standard hydrogen electrode (SHE) at 25 °C [16]. A platinum wire was used as the counter electrode. The exposed area of the specimens was fixed at 0.1964 cm2 (the diameter was 0.5 cm), and all of the specimens were wet-polished using 1200 grit SiC paper. The scanning rate of the polarization test was 0.001 V/s. Bubbling nitrogen gas was used to deaerate the oxygen in the solutions during the polarization test. The polarization test was conducted for 900 s.

Results and Discussion

Figure 1 shows the micrographs of the Cr27Fe24Co18Ni26Nb5 alloy in as-cast and as-annealed states. The microstructures of the Cr27Fe24Co18Ni26Nb5 alloy were dendritic in both the as-cast and the as-annealed states. The dendrites of the as-cast Cr27Fe24Co18Ni26Nb5 alloy were in the FCC phase, whereas the interdendritic regions exhibited a dual-phased (FCC and HCP) eutectic structure, as shown in Figure 1a. The HCP phase was significantly spheroidized and coarsened after annealing, as shown in Figure 1b, resulting in a reduction in both the interphase area and the free energy of the alloy. Table 1 lists the chemical compositions of the overall, FCC, and HCP phases in the Cr27Fe24Co18Ni26Nb5 alloy in atomic percentage. The overall compositions of the Cr27Fe24Co18Ni26Nb5 alloy matched the theoretical values. The FCC phase in the Cr27Fe24Co18Ni26Nb5 alloy had less Nb and Co, but more Cr, Fe, and Ni. A possible reason was that the elements niobium and cobalt potentially formed a melt with a low melting point because of a eutectic reaction, thus forming the HCP phase, featuring more Nb and Co, in the interdendrites of the alloy during casting. The Co-Nb binary phase diagram [17] shows that the melting point of a Co-20.3Nb alloy is only 1237 °C.
The Cr27Fe24Co18Ni26Nb5 alloy was based on the compositions in the FCC phase of the CrFeCoNiNb alloy [15]. However, the Cr27Fe24Co18Ni26Nb5 alloy was a dual-phased alloy, because the solid solubility of Nb in the alloy could change its composition. According to our previous studies on CrFeCoNiNbx alloys [15,18], the Nb content in the FCC phase of CrFeCoNiNbx alloys changes with the Nb content (x), as shown in Figure 2. The Nb content in the FCC phase of the alloys increases with the Nb content, becoming almost saturated at x = 0.6. In the present study, the Nb content of the alloy was only 5 at.%; thus, its solid solubility in the FCC phase was reduced, again forming the HCP phase in the Cr27Fe24Co18Ni26Nb5 alloy. However, the Cr27Fe24Co18Ni26Nb5 alloy is a five-element alloy, not a binary alloy. Figure 3 displays the XRD patterns of the Cr27Fe24Co18Ni26Nb5 alloy in as-cast and as-annealed states. Only two phases were detected: one was an FCC phase with a lattice constant of 3.58 Å, and the other was an HCP phase with lattice constants of 4.80 Å (a-axis) and 7.83 Å (c-axis).
The HCP and FCC phases had different compositions and structures; however, the ratios of these two phases did not change significantly. Therefore, the as-cast Cr 27 Fe 24 Co 18 Ni 26 Nb 5 alloy was selected to test its polarization behavior in 1 N H 2 SO 4 and 1 N HCl solutions. and 7.83 Å (c-axis). Heat treatment did not significantly influence the lattice constants and the relative intensities of these two phases. For example, the Cr27Fe24Co18Ni26Nb5 alloy was very stable, even though it was annealed at 1000 °C for 24 h. Figure 4 plots the relationship between the hardness of the Cr27Fe24Co18Ni26Nb5 alloy and the annealing time; the annealing temperature was 1000 °C. The hardness of the Cr27Fe24Co18Ni26Nb5 alloy remained at approximately 250 HV. This also proves that annealing at 1000 °C had almost no influence on this alloy. The volume fraction of the HCP phase was less than that of the FCC phase; therefore, the coarsening and spheroidizing of the HCP phase had no apparent effect on the hardness of the alloy. The HCP and FCC phases had different compositions and structures; however, the ratios of these two phases did not change significantly. Therefore, the as-cast Cr27Fe24Co18Ni26Nb5 alloy was selected to test its polarization behavior in 1 N H2SO4 and 1 N HCl solutions. and 7.83 Å (c-axis). Heat treatment did not significantly influence the lattice constants and the relative intensities of these two phases. For example, the Cr27Fe24Co18Ni26Nb5 alloy was very stable, even though it was annealed at 1000 °C for 24 h. Figure 4 plots the relationship between the hardness of the Cr27Fe24Co18Ni26Nb5 alloy and the annealing time; the annealing temperature was 1000 °C. The hardness of the Cr27Fe24Co18Ni26Nb5 alloy remained at approximately 250 HV. This also proves that annealing at 1000 °C had almost no influence on this alloy. The volume fraction of the HCP phase was less than that of the FCC phase; therefore, the coarsening and spheroidizing of the HCP phase had no apparent effect on the hardness of the alloy. The HCP and FCC phases had different compositions and structures; however, the ratios of these two phases did not change significantly. Therefore, the as-cast Cr27Fe24Co18Ni26Nb5 alloy was selected to test its polarization behavior in 1 N H2SO4 and 1 N HCl solutions. Figure 5a,b, respectively. The curve with a potential lower than the corrosion potential (E corr ) represents the cathodic curve, whereby the alloy under this state would be protected; the curve with a potential higher than the corrosion potential represents anodic curve, whereby the alloy under this state would be corroded. The cathodic line of the Cr 27 Fe 24 Co 18 Ni 26 Nb 5 alloy exhibited a Tafel slope (β c ); β c = ∆E/∆logi, where E is the potential, and i is the current density. The current density corresponding to E corr is the corrosion current density (i corr ). The current density of the alloy increases with the applied potential (overvoltage) before decreasing upon passing the anodic peak and entering the passivation region. Figure 5a displays the polarization curves of the Cr 27 Fe 24 Co 18 Ni 26 Nb 5 alloy tested in 1 N H 2 SO 4 solution. The corrosion potentials and corrosion current densities increased with test temperature. Furthermore, the current densities of the anodic peaks (i pp ) and passivation regions (i pass ) increased with test temperature. 
The polarization curves of the Cr27Fe24Co18Ni26Nb5 alloy tested in deaerated 1 N H2SO4 solution and 1 N HCl solution under different temperatures are shown in Figure 5a,b, respectively. The curve with a potential lower than the corrosion potential (Ecorr) represents the cathodic curve, whereby the alloy under this state would be protected; the curve with a potential higher than the corrosion potential represents the anodic curve, whereby the alloy under this state would be corroded. The cathodic line of the Cr27Fe24Co18Ni26Nb5 alloy exhibited a Tafel slope (βc); βc = ΔE/Δlog i, where E is the potential and i is the current density. The current density corresponding to Ecorr is the corrosion current density (icorr). The current density of the alloy increases with the applied potential (overvoltage) before decreasing upon passing the anodic peak and entering the passivation region. Figure 5a displays the polarization curves of the Cr27Fe24Co18Ni26Nb5 alloy tested in 1 N H2SO4 solution. The corrosion potentials and corrosion current densities increased with test temperature. Furthermore, the current densities of the anodic peaks (ipp) and passivation regions (ipass) increased with test temperature. However, all of the passivation regions of the Cr27Fe24Co18Ni26Nb5 alloy tested in deaerated 1 N H2SO4 solution retained complete shapes in the temperature range of 30-60 °C. The polarization data, namely, the Tafel slope (βc), corrosion potentials (Ecorr), corrosion current densities (icorr), passivation potential (Epp, potential of the anodic peak), anodic critical current density of the anodic peak (ipp), passive current density (ipass), and breakdown potential (Eb), of the Cr27Fe24Co18Ni26Nb5 alloy tested in deaerated 1 N H2SO4 solution under different temperatures are listed in Table 2. The polarization curves of the Cr27Fe24Co18Ni26Nb5 alloy tested in deaerated 1 N HCl solution under different temperatures are shown in Figure 5b. The corrosion potentials and corrosion current densities of the Cr27Fe24Co18Ni26Nb5 alloy tested in deaerated 1 N HCl solution increased with test temperature, similar to the results of the alloy tested in deaerated 1 N H2SO4 solution.
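As a concrete illustration of the Tafel relation defined above (βc = ΔE/Δlog i), the sketch below fits the cathodic branch of a polarization curve to estimate βc. The (E, i) points are invented placeholders, not values from Table 2.

```python
# Minimal sketch: estimate the cathodic Tafel slope beta_c = dE/dlog10(i)
# by a linear fit over points on the cathodic branch of a polarization
# curve. The E and i values below are illustrative placeholders.
import numpy as np

E = np.array([-0.55, -0.50, -0.45, -0.40])        # potential vs SSE, V
i = np.array([8.0e-5, 3.0e-5, 1.1e-5, 4.0e-6])    # current density, A/cm^2

beta_c, _ = np.polyfit(np.log10(i), E, 1)          # slope in V per decade
print(f"beta_c = {beta_c * 1000:.0f} mV/decade")   # ~ -115 mV/decade here
```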
However, the anodic peaks of the Cr27Fe24Co18Ni26Nb5 alloy tested in deaerated 1 N HCl solution were larger than those of the alloy tested in 1 N H2SO4 solution. A large anodic peak indicates that the alloy entered the passivation region with more difficulty in 1 N HCl solution. Moreover, breakdown of the passivation region of the Cr27Fe24Co18Ni26Nb5 alloy started at a testing temperature of 50 °C, becoming very clear at a testing temperature of 60 °C. This suggests that the Cr27Fe24Co18Ni26Nb5 alloy did not resist the attack from chloride ions at higher temperatures. The corrosion potentials (Ecorr) and corrosion current densities (icorr) of the Cr27Fe24Co18Ni26Nb5 alloy tested in deaerated 1 N HCl solution under different temperatures are also listed in Table 2. Figure 6 shows the Arrhenius plot of the relationship between the corrosion current density of the Cr27Fe24Co18Ni26Nb5 alloy and the test temperature in the two solutions. The relationship between corrosion current density and testing temperature satisfied icorr = A exp(−Q/RT), where icorr is the corrosion current density, A is a constant, Q is the activation energy, R is the gas constant, and T is the temperature. Therefore, the activation energy Q could be calculated by plotting ln icorr vs. 1/T, as shown in Figure 6. The activation energies of corrosion of the Cr27Fe24Co18Ni26Nb5 alloy in 1 N H2SO4 solution and 1 N HCl solution were 27.7 and 52.9 kJ/mol, respectively. Thus, the Cr27Fe24Co18Ni26Nb5 alloy tested in 1 N HCl solution had a larger activation energy. The morphologies of the Cr27Fe24Co18Ni26Nb5 alloy after polarization in these two solutions are shown in Figure 7; both cases showed a uniform corrosion morphology. Both the FCC and the HCP phases in the Cr27Fe24Co18Ni26Nb5 alloy were corroded after polarization in 1 N H2SO4 solution at 30 and 50 °C, as shown in Figure 7a,b, respectively. However, the FCC phase was more severely corroded than the HCP phase. Figure 7c,d show the morphologies of the Cr27Fe24Co18Ni26Nb5 alloy after polarization in 1 N HCl solution at 30 and 50 °C, respectively. The FCC phase was also severely corroded, but the HCP phase did not display any corrosion and kept its original shape.
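The activation-energy analysis above reduces to a straight-line fit of ln icorr against 1/T. A minimal sketch follows (Python); the icorr values are assumed placeholders rather than the measurements behind Figure 6.

```python
# Minimal sketch of extracting the corrosion activation energy Q from
# icorr = A*exp(-Q/RT) by linear regression of ln(icorr) on 1/T.
# The current densities below are made-up placeholders, not measured data.
import numpy as np

R = 8.314                                            # gas constant, J/(mol*K)
T = np.array([30.0, 40.0, 50.0, 60.0]) + 273.15      # test temperatures, K
icorr = np.array([2.1e-6, 3.4e-6, 5.5e-6, 8.3e-6])   # A/cm^2 (assumed values)

slope, intercept = np.polyfit(1.0 / T, np.log(icorr), 1)
Q = -slope * R                                       # activation energy, J/mol
print(f"Q = {Q / 1000:.1f} kJ/mol, A = {np.exp(intercept):.3g} A/cm^2")
```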
Therefore, the FCC phase was the major corroded phase of this alloy in both solutions.

Conclusions

The microstructure and corrosion behavior of the Cr27Fe24Co18Ni26Nb5 alloy were studied. This alloy had a dendritic structure with two phases, FCC and HCP. The dendrites were in the FCC phase, whereas the interdendrites exhibited a dual-phased eutectic structure. The Cr27Fe24Co18Ni26Nb5 alloy maintained its structure and hardness after annealing at 1000 °C for 24 h. All polarization curves of the Cr27Fe24Co18Ni26Nb5 alloy displayed complete shapes in the temperature range of 30-60 °C in 1 N H2SO4 solution, but a breakdown of the passivation region of the Cr27Fe24Co18Ni26Nb5 alloy in HCl solution was observed at 50 °C. The Cr27Fe24Co18Ni26Nb5 alloy showed uniform corrosion morphologies in both 1 N H2SO4 and 1 N HCl solutions. The corrosion activation energies of the Cr27Fe24Co18Ni26Nb5 alloy in 1 N H2SO4 and 1 N HCl solutions were 27.7 and 52.9 kJ/mol, respectively. However, the FCC phase in the Cr27Fe24Co18Ni26Nb5 alloy was more severely corroded than the HCP phase.
2021-10-15T15:31:50.755Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "8920035ddb7ab5ed79ff4045d4de96c9e55678ed", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1944/14/20/5924/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "80e02b9d390f6371c3edad18a96998077a07de45", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
35738517
pes2o/s2orc
v3-fos-license
Evidence for a Trypanosoma brucei Lipoprotein Scavenger Receptor

African trypanosomes are lipid auxotrophs that live in the bloodstream of their human and animal hosts. Trypanosomes require lipoproteins in addition to other serum components in order to multiply under axenic culture conditions. Delipidation of the lipoproteins abrogates their capacity to support trypanosome growth. Both major classes of serum lipoproteins, LDL and HDL, are primary sources of lipids, delivering cholesterol esters, cholesterol, and phospholipids to trypanosomes. We show evidence for the existence of a trypanosome lipoprotein scavenger receptor, which facilitates the endocytosis of both native and modified lipoproteins, including HDL and LDL. This lipoprotein scavenger receptor also exhibits selective lipid uptake, whereby the uptake of the lipid components of the lipoprotein exceeds that of the protein components. Trypanosome lytic factor (TLF1), an unusual HDL found in human serum that protects from infection by lysing Trypanosoma brucei brucei, is also bound and endocytosed by this lipoprotein scavenger receptor. HDL and LDL compete for the binding and uptake of TLF1 and thereby attenuate the trypanosome lysis mediated by TLF1. We also show that a mammalian scavenger receptor facilitates lipid uptake from TLF1 in a manner similar to the trypanosome scavenger receptor. Based on these results we propose that HDL, LDL, and TLF1 are all bound and taken up by a lipoprotein scavenger receptor, which may constitute the parasite's major pathway mediating the uptake of essential lipids.

Introduction

Exogenous lipids play indispensable roles in trypanosome cell structure and metabolism. African bloodstream-form trypanosomes are single-celled parasites that appear not to synthesize fatty acids de novo (1)(2)(3), with the exception of myristate (C14 fatty acid). Trypanosomes have an atypical type II fatty acid synthase that utilizes exogenously supplied butyrate to generate myristate, which is used exclusively for glycosylphosphatidylinositol anchor biosynthesis (4,5). Despite having a variety of enzymes that catalyze metabolic lipid-modifying pathways (6)(7)(8)(9), trypanosomes are lipid auxotrophs. They require lipoproteins in addition to other serum components in order to multiply under axenic culture conditions (10,11). Delipidation of the lipoproteins abrogates their capacity to support trypanosome growth. Both major classes of serum lipoproteins, LDL and HDL, are primary sources of lipids, delivering cholesterol esters, cholesterol and phospholipids to trypanosomes (12,13). Trypanosomes endocytose HDL and LDL through their flagellar pocket (10,13). All endocytosis and exocytosis in trypanosomes occurs at this site. It is a specialized invagination of the cell membrane that is not lined with the microtubule network which encases the rest of the cell and precludes any vesicular fusion or fission. At physiological concentrations (~1 mg/ml), specific binding and uptake of the protein component of both LDL and HDL has been demonstrated (12)(13)(14). In contrast, at sub-physiological concentrations (1-50 µg/ml) there was no detectable uptake of the apolipoproteins themselves, whereas the lipid components of HDL and LDL were taken up at rates that exceeded fluid phase endocytosis by 1000-fold, suggesting that "specific binding sites were probably involved" (15).
A putative LDL receptor protein has been purified but not yet cloned (16,17), while there has been no molecular identification of an HDL receptor in bloodstream-form trypanosomes. Trypanosome lytic factors are HDL-related particles found in human plasma. TLF1 contains lipid, apolipoprotein A-I (apoA-I), paraoxonase and haptoglobin-related protein (Hpr) (18), while TLF2 is a lipid-poor molecule that contains apoA-I, Hpr, and IgM (19). Both high and low affinity binding sites for TLF1 on trypanosomes have been reported in experiments using purified preparations of TLF1 (20). The low affinity binding site can be competed by HDL whereas the high affinity binding site is partially competed by reconstituted nonlytic HDL containing Hpr (21), which led to the proposal that Hpr can mediate TLF1 binding to trypanosomes through a haptoglobin-like receptor. Many lipoprotein receptors have been characterized in eukaryotes; to date only cubilin (22) and members of the CD36 superfamily of scavenger receptors (23)(24)(25) bind native HDL (without requiring ApoE as a component). The CD36 superfamily of scavenger receptors binds and takes up both native HDL and LDL as well as other polyanionic ligands, including oxidized and acetylated LDL (26). Some of these scavenger receptors mediate bi-directional lipid flux and exhibit a process called selective lipid uptake. In polarized cells, selective lipid uptake is characterized by receptor-mediated uptake of the lipoprotein, distribution of the lipid within the cell, and recycling of the apolipoprotein to the cell surface (27). In non-polarized cells there does not appear to be any uptake of the holo-particle; rather, binding to the surface of the cell via lipoprotein scavenger receptors facilitates the transfer of lipid from the lipoprotein into cell membranes and intracellular vesicles (28). After lipid transfer, the lipid-depleted particle is released intact from the cell. One of the members of this family, SR-BI (scavenger receptor class BI), mediates the highest level of selective lipid uptake analyzed to date (29,30). While studying trypanosome lytic factors, which are by definition lipoproteins, we decided to revisit lipoprotein receptors. We found evidence that T. b. brucei has a lipoprotein scavenger receptor that mediates the selective uptake of lipid over the protein component of both HDL and LDL. The same receptor can also mediate the uptake of oxidized lipoproteins. TLF1 is also bound and endocytosed by this lipoprotein scavenger receptor. We show that HDL and LDL compete for the binding and uptake of TLF1 and therefore attenuate the trypanosome lysis mediated by TLF1.

HDL and LDL compete for HDL uptake by trypanosomes

We labeled HDL and LDL with Alexa, a fluorophore that conjugates to the free amino groups in the protein components of these lipoproteins. We found that trypanosomes accumulated HDL protein (2.25 pmol, calculated based on a molecular mass of 350,000 Da by size exclusion chromatography, 50% of which is protein) and LDL protein (2 pmol). Fig. 1 panel D shows that 4 times more HDL than LDL was needed to give a 50% reduction in the uptake of HDL protein. This suggests that the putative lipoprotein receptor has higher affinity for LDL than HDL.
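The particle arithmetic quoted above (pmol of HDL inferred from a 350,000 Da particle that is 50% protein by mass) can be made explicit with a short sketch; the measured protein mass below is an invented input chosen so the output reproduces the 2.25 pmol figure.

```python
# Minimal sketch of the particle-number arithmetic: convert a measured mass
# of HDL-associated protein to pmol of HDL particles, assuming a 350,000 Da
# particle that is 50% protein by mass. The input mass is a placeholder.
protein_mass_ng = 394.0            # assumed measured Alexa-HDL protein, ng
particle_mw_da = 350_000.0         # HDL particle mass from size exclusion
protein_fraction = 0.50            # protein share of total particle mass

particle_mass_ng = protein_mass_ng / protein_fraction
pmol = particle_mass_ng * 1e-9 / particle_mw_da * 1e12   # g -> mol -> pmol
print(f"{pmol:.2f} pmol of HDL particles")               # -> 2.25 pmol
```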
HDL protein localized to intracellular vesicles (Fig. 3B) with a distribution similar to endocytosed concanavalin A (conA) (Fig. 3C). Concanavalin A has been shown to distribute within endocytic vesicles of trypanosomes when endocytosed by live trypanosomes (35,36); in contrast, conA labels the VSG coat when used on fixed trypanosomes, presumably due to the exposure of carbohydrate epitopes upon fixation. Coincubation with rhodamine-conA and Alexa-HDL revealed colocalization in some endocytic vesicles (yellow) near the flagellar pocket but not all endocytic vesicles (red) (Fig. 3D).

HDL competes for the binding of TLF1 to trypanosomes

Physiological concentrations of HDL competed for the binding of TLF1 to T. b. brucei (Fig. 4). We did not investigate the effect of LDL on the binding of TLF1 to T. b. brucei, because LDL takes 6 hours to reach equilibrium binding to trypanosomes whereas HDL. Given that we observed competition for binding of TLF1 to trypanosomes by HDL, we evaluated the effect of non-lytic bovine HDL on TLF1-mediated trypanolysis. We observed that non-lytic bovine HDL was able to attenuate trypanosome lysis by purified TLF1 (Fig. 5). Non-lytic human LDL was also effective in attenuating trypanolysis by TLF1.

TLF1 binds to mouse Scavenger Receptor Class B type I and donates lipids

Cells expressing mSR-BI stained readily with anti-mSR-BI (Fig. 6, insert). HDL labeled with the fluorescent lipid DiI exhibited lipid uptake into cells expressing mSR-BI that was 30-fold greater than the uptake by the parental ldlA cells (Fig. 6).

Several trypanosome receptors have been previously characterized biochemically. These include an LDL receptor (10) and an HDL receptor (13), both of which may be identical to the scavenger receptor described here (see below), a haptoglobin-like receptor which may also be a TLF receptor (21), and a receptor for transferrin which has been molecularly cloned (38)(39)(40)(41)(42)(43). The characterization of this trypanosome scavenger receptor serves to unify a variety of disparate data regarding the utilization of lipoproteins and TLF by the parasite. Vandeweerd et al. showed that the uptake at 37 °C of radiolabeled lipid components in either HDL or LDL was inhibited (50-85%) by unlabeled HDL or LDL (15). It was concluded that the uptake process did not discriminate between HDL or LDL. In this study we have confirmed and extended these observations. We also found that accumulation of HDL-labeled protein or HDL-labeled lipid was inhibited by increasing concentrations of HDL and LDL (Fig. 1C and 1D). Oxidized lipoproteins (45) are also ligands for eukaryotic lipoprotein scavenger receptors (46,47). Taken together, these results suggest the presence of a lipoprotein scavenger receptor in trypanosomes that can bind multiple ligands. The trypanosome lipoprotein scavenger receptor shares characteristics with certain subclasses of mammalian scavenger receptors. Members of the CD36 superfamily can bind native HDL and LDL and exhibit selective lipid uptake from both lipoproteins. Binding appears to be mediated by a combination of apolipoprotein and lipid. These characteristics most resemble what we have found for the putative trypanosome lipoprotein scavenger receptor. We find that when we correct for the specific activity of each labeled lipoprotein, the lipid components are selectively accumulated more than the protein component (Fig. 2). It is worth noting that although cholesterol/cholesterol ester is taken up 3-4 fold more than phospholipid, it only comprises 36% of native HDL lipids relative to 55% for phosphatidyl choline.
The selective uptake of lipoprotein cholesterol/cholesterol ester over phospholipid has also been characterized in SR-BI scavenger receptors, which are a subclass of the CD36 superfamily (28,48). Ligands other than native HDL and LDL have been identified for eukaryotic lipoprotein scavenger receptors, such as oxidized lipoproteins (46). Oxidized LDL is a ligand for the trypanosome lipoprotein receptor, in that native HDL and LDL or oxidized lipoproteins (not shown) were effective competitors for uptake. Native HDL was a consistently better competitor than native LDL when measuring oxidized LDL uptake by trypanosomes. On the other hand, native LDL was a better competitor than native HDL when measuring HDL uptake in trypanosomes (Figs. 1C and 1D). It has been shown for CD36 that oxidized LDL competes more effectively than LDL for HDL binding to CD36 (24). The similar biochemical properties of the trypanosome lipoprotein scavenger receptor and mammalian CD36 superfamily members compelled us to analyze the interaction of TLF1 with the prototypical class B eukaryotic lipoprotein scavenger receptor, SR-BI, which exhibits the highest degree of selective lipid uptake (29,30). We found that like HDL, TLF1 is able to donate lipids via this eukaryotic lipoprotein scavenger receptor (Fig. 6), indicating that TLF1 can bind to and productively interact with SR-BI. Although there was a specific association of TLF1 with CHO cells expressing SR-BI, we did not observe any obvious toxicity at physiological concentrations of TLF1 (~20 µg/ml). Both apolipoprotein and lipid are taken up by the parasite. The lipid is selectively removed from the lipoprotein and distributed throughout the cell (Fig. 3A). The apolipoprotein localizes to endocytic vesicles (Fig. 3D). The distribution of protein is very different from that seen for lipid, suggesting that at some point after interaction with a receptor the lipid is selectively removed and dispersed throughout the cell. The current hypotheses for HDL uptake in eukaryotic cells involve either retro-endocytosis or the formation of a non-polar channel, created by the binding of the apolipoprotein at the cell surface (28), through which lipids are delivered. We have demonstrated that HDL apolipoprotein is found inside the trypanosome (Fig. 3D). Because we detect intracellular HDL and we do not detect degradation (not shown), the majority of the endocytosed apolipoprotein may be recycled back to the cell surface. Other researchers have found that trypanosome endocytosis of HDL (13) and more recently TLF1 (49) does not result in the degradation of the apolipoproteins. In contrast, apolipoprotein B of LDL is rapidly degraded during its transit through the endocytic machinery (50). The reason for this difference in proteolytic processing is not known; it may be due to differential routing of the ligands in the endocytic pathway or differential sensitivity to endosomal and lysosomal proteases. Physiological concentrations of HDL can compete for at least ~80% of the binding of TLF1 to T. b. brucei (Fig. 4). It has been proposed that the remaining ~20% of TLF1 is taken up by another trypanosome receptor that recognizes haptoglobin (21). Irrespective of whether there are one or more receptors for TLF, if there is competition for binding of TLF, there should be competition for uptake.
It has been reported that TLF1 exhibits both high affinity (0.75-3.6 µg/ml) and low affinity (80-175 µg/ml) binding to trypanosomes, and that only the low affinity sites are competed by HDL (20,21). We believe, as has been proposed by others for LDL binding to trypanosomes (10,51), that the low affinity sites for TLF1 may represent single receptors along the flagellum and within the pocket, whereas high affinity sites represent dimerized or clustered receptors within the flagellar pocket. Complete inhibition of binding or uptake at the receptor level requires the competing ligand to be at least 100-fold above its own Kd in order to saturate all of the available receptors (52). Therefore complete inhibition of TLF1 binding and uptake at the receptor level would require 2.7-8 mg/ml of HDL (13,21), and 13-33 mg/ml LDL (44). These concentrations are above the physiological levels found in plasma, which are ~1 mg/ml. Therefore, in vivo as in our assay, the lipoproteins would be able to attenuate the killing by TLF1 but they would not inhibit the killing. This is illustrated by the attenuation of TLF1-mediated lysis (Fig. 5) in the presence of HDL (1-1.6 mg/ml) and LDL (0.75-1 mg/ml). Oxidized lipoproteins were in themselves trypanolytic2, and we therefore could not evaluate their capacity to attenuate TLF-mediated lysis.
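The saturation argument above — that a competitor must sit far above its own Kd to block essentially all binding — follows from the standard single-site competitive-binding relation. The sketch below illustrates it; the Kd values and concentrations are only loosely guided by the ranges quoted and are not fitted parameters from this study.

```python
# Minimal sketch of simple competitive binding: the fraction of receptors
# occupied by TLF1 in the presence of a competitor (HDL or LDL), using
# theta = (L/Kd_L) / (1 + L/Kd_L + C/Kd_C).
# Kd values and concentrations are illustrative assumptions.
def tlf1_occupancy(tlf1, kd_tlf1, competitor, kd_comp):
    """Equilibrium fractional occupancy of the receptor by TLF1."""
    return (tlf1 / kd_tlf1) / (1 + tlf1 / kd_tlf1 + competitor / kd_comp)

kd_tlf1 = 2.0        # ug/ml, within the reported high-affinity range
kd_hdl = 30.0        # ug/ml, assumed competitor affinity
tlf1 = 20.0          # ug/ml, near-physiological TLF1

for hdl in (0.0, 1000.0, 3000.0):   # no HDL, ~physiological, near-saturating
    theta = tlf1_occupancy(tlf1, kd_tlf1, hdl, kd_hdl)
    print(f"HDL {hdl:6.0f} ug/ml -> TLF1 occupancy {theta:.2f}")
# Occupancy drops from ~0.91 to ~0.23 to ~0.09: physiological lipoprotein
# levels attenuate, but do not abolish, TLF1 binding.
```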
2018-04-03T02:57:53.691Z
2003-01-03T00:00:00.000
{ "year": 2003, "sha1": "4c5fd33cbdad82044f9f66b4a54801c95913ed2a", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/278/1/422.full.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "ced75e0e956339ca470924ef6abb7a537381e741", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
207943511
pes2o/s2orc
v3-fos-license
Occurrence, Genetic Diversities And Antibiotic Resistance Profiles Of Salmonella Serovars Isolated From Chickens

Purpose

Contamination with Salmonella on food products, and poultry in particular, has been linked to foodborne infections and/or death in humans. This study investigated the occurrence, genetic diversities and antibiotic resistance profiles of Salmonella strains isolated from chickens.

Patients and methods

Twenty duplicate faecal swab samples each were collected from five different poultry pens of broilers, layers and indigenous chickens in the North-West Province, South Africa. The identities of isolates were confirmed through amplification and sequence analysis of 16S rRNA and invA gene fragments, after which a phylogenetic tree was constructed. Salmonella Enteritidis (ATCC 13076TM), Salmonella Typhimurium (ATCC 14028TM) and E. coli (ATCC 25922TM) were used as positive and negative controls, respectively. The serotypes of Salmonella isolates were determined. Antibiotic resistance profiles of the isolates against eleven antimicrobial agents were determined.

Results

Eighty-four percent (84%) of representative isolates possessed the invA genes. The percent occurrence and diversity of Salmonella subspecies in chickens was 1.81-30.9% and was highest in Salmonella enterica subsp. enterica. Notably, the following serotypes were detected: Salmonella bongori (10.09%), Salmonella Pullorum (1.81%), Salmonella Typhimurium (12.72%), and Salmonella Weltevreden, Salmonella Chingola, Salmonella Houten and Salmonella Bareily (1.81%). Isolates (96.6%) displayed multidrug resistance profiles, and the identification of isolates resistant to more than nine antibiotics was a cause for concern.

Conclusion

This study indicates that isolates had pre-exposure histories to the antibiotics tested and may pose severe threats to food security and public health.

Introduction

Salmonella spp. are enteric pathogens that have received a lot of attention due to their ability to cause food-borne diseases and high rates of mortality amongst humans, and thus were declared agents of public health significance. 1,2 Salmonellosis is the most common food-borne disease caused by Salmonella species in humans, with symptoms ranging from headache, vomiting, fatigue and nausea to bloody diarrhea, gastroenteritis and abdominal cramps; the disease is often self-limiting, and antimicrobials are often not prescribed for its control. [3][4][5] Hence, the pathogen is capable of causing socio-economic and public health burdens in humans. Salmonella enterica serotype Typhimurium (S. Typhimurium) and Salmonella enterica serotype Enteritidis (S. Enteritidis) are considered of high health importance due to their ability to cause salmonellosis in humans and veterinary animals in both developed and developing countries of the world. Salmonella has been highlighted as an economically important zoonotic pathogen by the World Health Organisation (WHO) and the Food and Agriculture Organisation (FAO) as far back as the 1950s. 1 Salmonella spp. are enteric pathogens that co-exist with pathogens such as Escherichia coli, Klebsiella spp., and Proteus spp. 6 According to Kagambèga et al, 5 ruminants such as cattle and sheep, non-ruminants such as pigs, dogs and rodents, poultry, birds, and cold-blooded animals such as fish and lizards, as well as humans, have been implicated as reservoirs of the typhoidal and the non-typhoidal Salmonella species. However, poultry and its products are the major sources of Salmonella-borne infection in the food chain. The ability of Salmonella to be transmitted from reservoirs to other animals and humans is a cause for concern, making its surveillance and control among suspected reservoirs such as chickens necessary. The influx of many antibiotic-resistant strains into the environment is a further concern. Antibiotic resistance is currently a global problem that poses a threat to public health. Therefore, a study to investigate the occurrence and antibiotic resistance profiles of Salmonella among chickens, whose carcasses form a major part of South African cuisine, is germane.

Sample Collection

This study was conducted within farms located at Ngaka Modiri Molema District of Mafikeng, North West Province, South Africa. The study site's (Mafikeng) geographical coordinates are 25°52′0″ South, 25°39′0″ East. Twenty samples each were collected in duplicate from five different poultry pens, housing broiler, layer and indigenous chickens in the study area. The broilers and layers belonged to the White Leghorn breed, while the indigenous chickens belonged to the Potchefstroom Koekoek breed (Gallus gallus domesticus). Swabs from the gut were aseptically collected in duplicate from test animals and transported on ice to the laboratory for analysis within 24 hrs of collection. Ethical clearance for the study was obtained from the Mafikeng Animal Research Ethics Committee of the North West University prior to the commencement of sampling. Samples were also collected under the supervision of trained Veterinarians and Animal Health Technicians from the Centre for Animal Health Studies, North West University, South Africa.

Isolation Of Microbial Isolates

Isolation of Salmonella spp. from chickens was done using the ISO-6579:2002 procedure. 8 Sample pre-enrichment and enrichment were achieved in buffered peptone water and tetrathionate broth, respectively, prior to enrichment in Rappaport-Vassiliadis Soy (RVS) broth and incubation at 42°C for 24 hrs. About 1 mL of the inoculated RVS broth was plated on sterile Salmonella-Shigella Agar (SSA) and incubated aerobically at 37°C for 18 hrs. Creamy colonies with or without black centres on SSA were regarded as presumptive Salmonella isolates and were further studied. Sub-culturing was done until pure colonies were obtained.

Morphological And Biochemical Characterization Of Isolates

The morphological and biochemical tests (Gram staining, catalase, Simmons citrate test, urease and Triple sugar iron (TSI) agar) were performed as described by Ateba and Mochaiwa. 9 Gram-negative rods and catalase-positive colonies were maintained on double-strength slants and stored at −20°C for further use.

Molecular Characterisation Of Isolates

The amplification of the 16S rRNA region of the bacteria was employed for the discrimination of presumptive Salmonella isolates. The DNA was extracted using a Fungal/Bacterial DNA extraction kit (Zymo Research Corporation, Southern California, USA) following the manufacturer's specification. The pure eluted DNA was stored at −80°C for further analysis. The pure DNA was quantified using a Nanodrop Lite spectrophotometer (Model 1558) obtained from Thermo Scientific, USA, and the genomic DNA was checked on a 1% agarose gel. The presence of a fluorescent band when viewed under the UV transilluminator (Biorad Gel Doc TM XR+) confirmed the presence of DNA of presumptive Salmonella isolates.

PCR Amplification Of 16S rRNA

The 16S ribosomal RNA (16S rRNA) PCR was employed in the identification of Salmonella isolates. 10
The ability of Salmonella to be transmitted from reservoirs to other animals and humans is a cause for concern, making its surveillance and control among suspected reservoirs such as chickens necessary. The influx of antibiotic-resistant strains into the environment is equally worrying, as antibiotic resistance is currently a global problem that poses a threat to public health. Therefore, a study investigating the occurrence and antibiotic resistance profiles of Salmonella among chickens, whose carcasses form a major part of South African cuisine, is germane. Sample Collection This study was conducted within farms located in the Ngaka Modiri Molema District of Mafikeng, North West Province, South Africa. The study site's (Mafikeng) geographical coordinates are 25°52ʹ0" South, 25°39ʹ0" East. Twenty samples each were collected in duplicate from five different poultry pens, housing broiler, layer and indigenous chickens in the study area. The broilers and layers belonged to the White Leghorn breed, while the indigenous chickens belonged to the Potchefstroom Koekoek breed (Gallus gallus domesticus). Swabs from the gut were aseptically collected in duplicate from test animals and transported on ice to the laboratory for analysis within 24 hrs of collection. Ethical clearance for the study was obtained from the Mafikeng Animal Research Ethics Committee of the North West University prior to the commencement of sampling. Samples were also collected under the supervision of trained veterinarians and animal health technicians from the Centre for Animal Health Studies, North West University, South Africa. Isolation Of Microbial Isolates Isolation of Salmonella spp. from chickens was done using the ISO-6579:2002 procedure. 8 Sample pre-enrichment and enrichment were achieved in buffered peptone water and tetrathionate broth, respectively, prior to enrichment in Rappaport-Vassiliadis Soya (RVS) broth and incubation at 42°C for 24 hrs. About 1 mL of the inoculated RVS broth was plated on sterile Salmonella-Shigella Agar (SSA) and incubated aerobically at 37°C for 18 hrs. Colonies with a creamy colour, with or without a black centre, on SSA were regarded as presumptive Salmonella isolates and were studied further. Sub-culturing was done until pure colonies were obtained. Morphological And Biochemical Characterization Of Isolates The morphological and biochemical tests (Gram staining, catalase, Simmons citrate test, urease and Triple sugar iron (TSI) agar) were performed as described by Ateba and Mochaiwa. 9 Gram-negative rods and catalase-positive colonies were maintained on double-strength slants and stored at −20°C for further use. Molecular Characterisation Of Isolates Amplification of the 16S rRNA region of the bacteria was employed for the discrimination of presumptive Salmonella isolates. The DNA was extracted using a Fungal/Bacterial DNA extraction kit (Zymo Research Corporation, Southern California, USA) following the manufacturer's specifications. The pure eluted DNA was stored at −80°C for further analysis. The pure DNA was quantified using a Nanodrop Lite spectrophotometer (Model 1558) obtained from Thermo Scientific, USA, and the genomic DNA was visualised on a 1% agarose gel. The presence of a fluorescent band when viewed under the UV transilluminator (Biorad Gel Doc TM XR+) confirmed the presence of DNA of presumptive Salmonella isolates. PCR Amplification Of 16S rRNA The 16S ribosomal RNA (16S rRNA) PCR was employed in the identification of Salmonella isolates.
10 The amplification was done using a Biorad C1000 Touch TM Thermal Cycler. The 27F (5′-AGAGTTTGATCCTGGCTCAG-3′) and 1492R (5′-GGTTACCTTGTTACGACTT-3′) primers used were synthesized at Inqaba Biotechnical Industries (Pty) Ltd, South Africa, with an expected amplicon size of 1450 bp. For the polymerase chain reaction (PCR), a 25 µL reaction mix composed of 12 µL of master mix (Thermo Scientific PCR Master Mix 2X), oligonucleotides (1 µL), DNA template (4 µL) and nuclease-free water (7 µL) was used. 10 The negative controls used in the PCR assays were Aspergillus flavus and water as templates, while the positive control was Salmonella Typhimurium ATCC 14028 TM . Amplification Of The invA Gene In Presumptive Salmonella Isolates The invA gene fragment was amplified using the primer set invA F (5′-GTGAAATTATCGCCACGTTCGGGCAA-3′) and invA R (5′-TCATCGCACCGTCAAAGGAACC-3′), with an expected amplicon size of 284 bp. Slight modifications of the annealing temperature previously reported by Ateba and Mochaiwa 9 were used: initial denaturation (95°C for 2 mins), denaturation (95°C for 15 s), annealing (47.8°C for 1 min), elongation (72°C for 45 s) and final elongation (72°C for 7 mins). A 25 µL reaction mix was used in the amplification, composed of 12.5 µL of master mix (Thermo Scientific PCR Master Mix 2X), oligonucleotides (1 µL), DNA template (4 µL) and nuclease-free water (6.5 µL). A positive control (Salmonella Typhimurium ATCC 14028) and negative controls (Escherichia coli ATCC 25922 and non-template water) were used. Gel Electrophoresis Of Amplicons The molecular weight of PCR amplicons was determined by gel electrophoresis. 11 A 1 kb DNA marker (Fermentas Life Science, Lithuania) was used, and the gel was run at 60 volts, 400 mA for 60 mins in 1% tris acetate ethylenediaminetetraacetate (TAE) buffer before photographing under UV transilluminator light (Biorad Gel Doc TM XR+). Gene Sequencing And Identification Of Isolates The amplified product was sequenced using an automated DNA sequencer (SpectruMedix model SCE 2410) at Inqaba Biotechnical Industries (Pty) Ltd, Pretoria. The resulting sequences were cleaned using the FinchTV software version 1.4.0 (Geospiza Inc.) and searched against the National Centre for Biotechnology Information 12 database using the Nucleotide Basic Local Alignment Search Tool (BLAST) program (http://www.ncbi.nlm.nih.gov/BLAST). Isolates were identified based on the highest percentage of similarity, and sequences were deposited in the NCBI GenBank, from which accession numbers were obtained. The serotypes of presumptive Salmonella isolates were determined using Salmonella antisera agglutination kits. Isolates were then classified into serotypes as described in the Kauffmann-White Salmonella classification. Phylogenetic Tree Construction Cleaned sequences were aligned with the ClustalW sequence alignment tool and de-gapped using the BioEdit software package. 13,14 To identify putative close phylogenetic relatives, multiple sequence alignments were obtained using ClustalW against corresponding nucleotide sequences retrieved from GenBank. The evolutionary distance matrices were generated. 15 Phylogenetic analysis was done using the neighbour-joining method 16 in the MEGA program version 5.10. 17 Bootstrap analysis was done using 1000 replications for neighbour joining. The sequences were checked for putative chimeric artefacts using the Chimera-Buster program, and manipulation and tree editing were done using the TreeView option. 18
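As a rough sketch of the distance-based neighbour-joining step described above, the following Python/Biopython fragment builds an NJ tree from an aligned FASTA file and attaches bootstrap support. The input file name and the outgroup label are hypothetical placeholders, the study itself used MEGA rather than Biopython, and 100 bootstrap replicates are used here for speed versus the 1,000 reported.

```python
# Sketch only: NJ tree with bootstrap support, mirroring the MEGA workflow.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Phylo.Consensus import bootstrap_trees, get_support

# Hypothetical file: ClustalW-aligned, de-gapped 16S rRNA sequences.
aln = AlignIO.read("salmonella_16s_aligned.fasta", "fasta")

calculator = DistanceCalculator("identity")          # pairwise distance matrix
constructor = DistanceTreeConstructor(calculator, "nj")
tree = constructor.build_tree(aln)                   # neighbour-joining tree

# Bootstrap support (100 replicates here; the study used 1,000).
replicates = list(bootstrap_trees(aln, 100, constructor))
tree = get_support(tree, replicates)

# Root on a reference sequence, as described in the text (label hypothetical).
tree.root_with_outgroup("Salmonella_enterica_ref")
Phylo.draw_ascii(tree)
```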
Salmonella enterica was used as the root of the tree. Determination Of Antibiotics Resistance Profile Of Salmonella Isolates The antibiotic sensitivity of Salmonella isolates was investigated against eleven antibiotics belonging to eight different classes using the disc diffusion method. 19 The antibiotics used were: ampicillin (10 µg), oxy-tetracycline (30 µg), ciprofloxacin (5 µg), streptomycin (10 µg), gentamicin (10 µg), sulphamethoxazole/trimethoprim (300 µg), chloramphenicol (30 µg), erythromycin (15 µg), norfloxacin (10 µg), cephalothin (30 µg), and nalidixic acid (30 µg). Antibiotic discs were placed equidistant from each other on Mueller-Hinton agar (MHA) plates, which were incubated at 37°C for 18 hrs. After incubation, zones of inhibition around the antibiotic discs were measured using a metre rule graduated in millimetres. The test was performed in triplicate and the mean diameters of the inhibitory zones (IZD) were calculated. Each mean IZD was classified as susceptible, intermediate, or resistant using the Clinical and Laboratory Standards Institute 20 criteria. Multiple antibiotic resistance (MAR) phenotypes were recorded for isolates showing resistance to more than two antibiotics. 21 Clustering Of Antibiotic-Resistance Patterns Of Salmonella Isolates To determine the similarities and differences between Salmonella isolates from different sources based on their antibiotic resistance patterns, cluster analysis was done. The IZDs of Salmonella strains were clustered using the Statistica software package (Statsoft, USA) and a dendrogram was generated. Ward's method and the Euclidean distance metric were used to generate the clusters. Statistical Analysis The statistical analysis of the data generated was performed using the Statistical Package for the Social Sciences (SPSS, version 21.0, IBM Corp., USA). The frequency and percentage occurrence of isolates, and correlations between isolate antibiotic resistance and sources, were determined using Pearson's product-moment correlation. The cluster analysis of antibiotic sensitivity patterns of Salmonella isolates was evaluated through Ward's algorithm and Euclidean distances in the Statistica software version 7.0 (Statsoft, USA). Significance and goodness of fit were evaluated at a 95% confidence interval, while sequences were cleaned and processed using FinchTV and BioEdit, and the phylogenetic tree was constructed using the MEGA6 software. Results The morphological and biochemical characteristics of presumptive Salmonella isolates from chickens in the Mafikeng community, South Africa, are presented in Supplementary material S1. Colony pigmentation ranged from pink to colourless, with or without a black centre, on Salmonella-Shigella Agar. As shown in Supplementary material A1, 96 percent of colonies had a circular shape, while 83.63% were opaque and 16.36% translucent. All selected isolates were gram-negative rods able to hydrolyse hydrogen peroxide through production of the catalase enzyme. Isolates showed alkalinity as a red colour on slants and a yellow butt with or without gas, signifying acid production, while black pigment in the butt indicated hydrogen sulphide production, which is typical of Salmonella. About 81.81% of the isolates were positive for alkalinity, 16.36% were negative and 3.63% had a weak alkalinity reaction, while 96.36% were able to produce acid.
In total, 96.36% of the presumptive isolates had the ability to utilize citrate as a sole source of carbon and energy, while 98.18% tested negative for urease and indole production. Figure 1 presents the gel picture of 16S rRNA amplification of representative presumptive Salmonella isolates from chickens. The 16S rRNA amplification was performed twice to ensure the reliability of the results obtained. There was 100% positive amplification at the expected band size of 1450 bp. There was positive amplification of Salmonella Typhimurium ATCC 14028 in lane 1, while no amplification was observed in lane 14 (negative control). The positive amplification confirms the suitability of the 27F and 1492R oligonucleotide set for amplification of the 16S rRNA region in enteric bacteria. Salmonella-specific PCR was conducted using the invA gene. As shown in Figure 2, about 87.27% of the representative isolates showed positive amplification while about 12.72% were negative. As shown in Table 1, the percent similarity of isolates to data in the NCBI GenBank ranged from 85% to 99%: 26% of the isolates showed 99% similarity, while 85% showed 94% similarity to Salmonella. The percent occurrence of Salmonella in chickens is presented in Figure 3. The percent occurrence based on subspecies ranged from 2% to 61% (Supplementary material S2). The most frequently occurring subspecies was Salmonella enterica subsp. enterica (61%), while the least occurring serotypes were Salmonella Salamae, Salmonella Weltevreden, Salmonella Chingola, Salmonella Houten and Salmonella Bareilly (2% each). Based on source, Salmonella Typhimurium was highest in indigenous chickens (9.08%) followed by layers (3.63%), while in broilers Salmonella Arizonae was highest, as described in Figure 3. Salmonella Salamae was not isolated from the indigenous and broiler chickens at the study site. Likewise, Salmonella Arizonae and Salmonella Weltevreden were not isolated from layers, only from indigenous breeds and broiler chickens. Autoagglutination occurred in some Salmonella bongori and Salmonella enterica subspecies enterica isolates, hence the inability to determine the serotypes of these isolates. Non-specific agglutination, resulting from loss of antigen expression, can give pseudo-positive results, as reported earlier. 23,24 Based on the clustering obtained with the neighbour-joining method, the evolutionary relatedness of Salmonella isolates is presented in Figure 4. The overall evolutionary distance of the Salmonella isolates was 35.13655429. Most of the Salmonella isolates were found to evolve from the same ancestral origin, with similarities higher than 70%, and were comparable to strains sourced from GenBank. Salmonella enterica subsp. enterica (MG663457, MG663509, MG663461 and MG663502) were found to evolve from the same ancestor, which we presumed to be Salmonella spp. However, genetic divergence was observed in these isolates, with 72% homology compared to MG663500, MG663459, and MG663456, which had 99% homology to the genetic sequences of the parent genome. Also, Salmonella bongori (MG663487) exhibited 100% homology with other Salmonella isolates. All the comparable sequences from GenBank, Salmonella spp. (KU641443), Salmonella Arizonae (CP006693) and Salmonella bongori (KR350635), showed relatedness and were comparable to the sequences identified as Salmonella bongori (MG663486), MG663492 and Salmonella Blockley (MG663495). Isolates Salmonella enterica subsp.
enterica (MG663485, MG663489, MG663485 and MG663462) had 100% evolutionary relatedness to the parental genus. Salmonella Houten (MG663464) and Salmonella Heidelberg (MG663483) clustered together in the same clade. Salmonella enterica subsp. enterica (MG663464) had 100% homology to an outgroup (Salmonella enterica subsp. enterica KY656601). However, Salmonella enterica subsp. enterica (MG663468) was similar to Salmonella Enteritidis (CP018655) and had 94% homology to Salmonella Typhimurium (MG663465). Salmonella Typhimurium (MG663510, MH086979 and MG663473) shared the same node, showing that they evolved from the same ancestor. Table 2 presents the antibiotic sensitivity profiles of Salmonella isolates from chickens in Mafikeng. About 56% of the total Salmonella isolates were resistant to ampicillin, 18% had intermediate resistance and 26% were susceptible (Supplementary material S3). Ampicillin-resistant strains were found most frequently in the indigenous chickens (81%), followed by broilers (36%), and were lowest in the layers (27%). About 69% of all Salmonella isolates were resistant to oxy-tetracycline, with 9% being intermediately resistant, and oxy-tetracycline resistance occurred at a higher frequency in some sample sources. Ninety-five percent resistance to streptomycin was obtained in this study and was dominant in the layers. The percent resistance to trimethoprim/sulphamethoxazole ranged from 64% to 84% in the different samples investigated. When exposed to chloramphenicol, only a small proportion (8-20%) of isolates were resistant to this drug. Against erythromycin, 100% resistance was observed, independent of sample source. Hence, the use of erythromycin in the treatment of Salmonella-borne infection should be avoided. The multiple antibiotic resistance indices and antibiotic-resistance phenotypes of the Salmonella isolates are presented in Table 3 (Notes: lane 1, Salmonella Typhimurium, positive control; control 2, Escherichia coli, negative control, an environmental strain. Abbreviations: +VE, positive amplification; −VE, negative amplification; AAG, auto-agglutination against antisera; AG, positive agglutination). Salmonella Koessen, Salmonella India, Salmonella Crossness and Salmonella Yovokome (AMP-STR-SXT-ERY-KF) had penta-resistant phenotype patterns. Octa-antibiotic resistance was obtained in Salmonella Houten, Salmonella Bovismorbificans, Salmonella Blegdam, Salmonella Typhimurium and Salmonella bongori. The Salmonella Typhimurium isolated in this study were found only in the indigenous chickens, with MAR indices ranging from 0.72 to 0.81. This is indicative of high multi-antibiotic resistance against the mainstream antibiotics often prescribed in the treatment of Salmonella infections in both humans and animals. All octa-antibiotic-resistant strains were resistant to nalidixic acid except Salmonella Blegdam and Salmonella bongori. The relatedness and differences between the MAR-resistant strains of Salmonella are shown in Figure 5. Three (3) major clusters (Clusters I, II and III) were observed and were traced to the sources of the isolates as described in Table 5. A total of 18 isolates grouped in cluster I, 17 in cluster II and 20 in cluster III; cluster III recorded the highest distribution of Salmonella isolates. Table 4 presents the percentage distribution of Salmonella based on sample source and antibiotic-resistance clustering patterns.
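As a minimal sketch of the MAR-index and Ward/Euclidean clustering calculations behind Tables 3-5, the following Python fragment uses scipy in place of Statistica. The small inhibition-zone matrix and the single 13 mm resistance breakpoint are illustrative assumptions; real interpretation uses per-antibiotic CLSI breakpoints.

```python
# Sketch only: MAR indices and Ward clustering of inhibition-zone diameters.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

antibiotics = ["AMP", "OT", "CIP", "STR", "GEN", "SXT", "C", "ERY", "NOR", "KF", "NA"]
# Rows = isolates, columns = mean IZD in mm (synthetic example values).
izd = np.array([
    [6, 8, 22, 7, 18, 9, 24, 6, 20, 10, 12],     # isolate 1
    [7, 9, 21, 8, 19, 8, 23, 6, 19, 11, 13],     # isolate 2
    [18, 20, 25, 17, 21, 19, 26, 6, 24, 20, 22], # isolate 3
])

# MAR index = antibiotics resisted / antibiotics tested, using one simplified
# breakpoint (IZD < 13 mm) for all drugs; real CLSI breakpoints differ per drug.
resistant = izd < 13
mar_index = resistant.sum(axis=1) / len(antibiotics)
print("MAR indices:", np.round(mar_index, 2))

# Ward's method on Euclidean distances, as in the Statistica analysis.
Z = linkage(izd, method="ward", metric="euclidean")
print("Cluster assignment (3 clusters):", fcluster(Z, t=3, criterion="maxclust"))
```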
The percentage distribution of Salmonella isolates based on antibiotic-resistance clusters ranged from 15% to 73.3%. The distribution of resistant strains in cluster I ranged from 16.6% to 73.3%, and the largest proportion were from indigenous chickens (11; 73.3%). In clusters I and III, respectively, isolates from layers (22.2%; 35%) and indigenous chickens (73.3%; 50%) had the highest relatedness in terms of antibiotic resistance patterns, as opposed to isolates from broilers. However, there was no significant difference between isolates from layers clustered in clusters II (4; 23%) and I (4; 22.2%) at P≥0.05. The percent distribution of Salmonella isolates from indigenous chickens ranged from 29.4% to 73.3%. As shown in Table 5, a positive correlation exists in the antibiotic-resistance patterns of Salmonella isolates from layers and broilers, while a negative correlation was obtained for indigenous chickens. Discussion The morphological characteristics observed in this study support previous observations contained in the WHO Global Salm-Surv, as described by Hendriksen et al. 25 The triple sugar iron test of the presumptive Salmonella isolates reflects their ability to utilize lactose, saccharose and dextrose sugars. A positive indole test reflects the ability of isolates to break down the amino acid tryptophan, producing indole. However, some isolates showed a positive reaction to urease and indole, which contradicts the expected results stipulated in Bergey's Manual of Determinative Bacteriology. Nevertheless, a similar variation has been reported by Shan et al. 26 The observation in this study might be due to a shift in the nutrient utilisation pattern of Salmonella as a result of ecological stress emanating from competition for food and other stress inducers. Hence, biochemical characteristics might not be adequate to effectively discriminate a microbial community, underlining the need for more reliable approaches such as molecular techniques. The PCR discrimination method was effective in the discrimination of Salmonella spp., as opposed to the use of biochemical and morphological characteristics, as seen in this study. Therefore, the polymerase chain reaction presents a rapid, sensitive and reliable method for pathogen detection. The positive amplification of the invA gene in Salmonella isolates supports the findings of previous authors on the presence of invasion genes in Salmonella. 27 Earlier work demonstrated the presence and functionality of the invA, B and C genes, which are regions of high similarity among diverse Salmonella serovars, except in Salmonella Arizonae, in which invD gene regions were detected. The presence of invA genes has been reported in Salmonella isolates from broiler chickens in Iran, 28 and from poultry, pigs, humans and other food commodities in Brazil. 29 The use of the invA gene has been regarded as the most reliable approach to Salmonella discrimination, since most Salmonella strains possess the invA gene within their genomes. It is therefore pertinent to the tracking of the pathogenesis of Salmonella-borne infections in animals and humans. The virulence of Salmonella in hosts has been linked to their ability to invade epithelial tissues. On ingestion, Salmonella attaches itself to the intestinal mucosal lining, contributing to a decrease in the pH of the gastrointestinal tract and thus causing irritation. Invasive Salmonella species can breach the mucosal layer by penetrating through the M cells overlying the Peyer's patches.
30 In some patients, this situation may progress to a systemic infection resulting from the invasion of the intestinal lymphoid follicles by Salmonella strains, which presents clinical signs associated with drained mesenteric lymph nodes. In this study, some isolates were found not to possess the invA gene, thus implying an inability to cause infections in hosts. However, the occurrence of invasive Salmonella isolates among the chicken samples within the Mafikeng community suggests that consumers and other stakeholders within the food and value chain might be at risk of Salmonella-borne infections. This can hamper the safety and health of both animals and humans and the socio-economic status of the people living in the Mafikeng community, South Africa. Salmonella species such as Salmonella Bovismorbificans were isolated in this study, as opposed to previous reports of this serovar being found only in humans. 31 Many of the diverse Salmonella strains identified in this study have been reported to possess extended-spectrum β-lactamase (ESBL) enzymes encoded by antibiotic-resistance genes, and have been linked to salmonellosis in humans. [31][32][33][34] The isolation of these strains from chickens may have resulted from human-to-animal interaction along the available interfaces, such as contaminated feed, water, handling, infected hosts (rodents) and animal care personnel on the farms. The dominance of Salmonella enterica species in chickens supports earlier findings 35 of a high occurrence of Salmonella Typhimurium compared with the serotype Salmonella Enteritidis in a study conducted in the Democratic Republic of Congo. Also, Salmonella Heidelberg, Salmonella Koessen, Salmonella Pullorum and Salmonella Gallinarum 36 have previously been isolated from poultry, supporting the view that aves (poultry) are reservoirs of Salmonella. 34 Salmonella Koessen, Salmonella Pullorum and Salmonella Gallinarum have been linked to salmonellosis originating from eggs. These Salmonella serovars constitute a threat to food safety and are capable of causing human illness. Variations in the occurrence of Salmonella serovars have been reported in different countries and are said to be a function of geographical location. [36][37][38] The typhoidal groups have been implicated as the most common cause of ill health in developing countries. 39,40 However, the non-typhoidal Salmonella serovars (Salmonella Typhimurium and Salmonella Enteritidis) are regarded as the major causes of salmonellosis outbreaks in developing countries such as India, Iran and many sub-Saharan African nations. 41 Hence, the isolation of virulent strains of Salmonella in chickens, which form a major part of South African cuisine, raises concerns for food security and consumer safety. The bootstrap values within the evolutionary tree were higher than 70%, which supports the previous report of Wayne et al 42 on criteria for deciding close relatedness between organisms. However, the disparity in homology between CP012144 and other isolates could be due to mutation and the development of new traits as evolution proceeds. CP012144 was nevertheless comparable to Salmonella enterica subsp. enterica (MG663470) isolated in this study, confirming a shared ancestral origin. The development of new traits with 99% homology was observed in isolates MG663482, MG66347, and MG663480, showing that they all evolved from the same ancestor.
This corroborates the previous report that Salmonella strains evolved within two broad lineages, Salmonella enterica and Salmonella bongori. 43 The bootstrap clustering of MG663495 with S. bongori could be due to evolutionary traits that are not pronounced in the isolates. However, the high bootstrap values within the Salmonella enterica group are indicative of high genetic relatedness and of the reliability of traits developed, which cannot easily disappear or be wiped out overnight. 44 The bootstrap values of Salmonella isolates and control strains in this study were higher than 50%, which shows a high level of repeatable clustering within the isolates. This supports the findings of Soltis and Soltis 45 on acceptable bootstrap values (70-100%) in the construction of phylogenies. The observation of 100% bootstrap values for some isolates, as shown in the phylogenetic tree, indicates that those groupings were recovered in every bootstrap replicate of the genomes compared. Ampicillin belongs to the group of aminopenicillins, which are often administered in the treatment of diseases caused by gram-negative pathogenic enteric bacteria. Gentamicin belongs to the group of aminoglycosides alongside streptomycin, but in this study gentamicin showed effectiveness in the control of Salmonella. Gentamicin has been reported to have higher activity against Salmonella strains compared with other antibiotics used in previous studies. 46 This might be because gentamicin is not among the most common antibiotics administered in the treatment of Salmonella-caused infections. However, it must be noted that uncontrolled use of this antibiotic could also lead to a build-up of resistance. All isolated Salmonella bongori strains were resistant to ciprofloxacin and nalidixic acid, which belong to the quinolone class, often regarded as the last resort in the treatment of Salmonella infections. Similar reports have been made on the isolation of fluoroquinolone-resistant Salmonella in Taiwan. 47 Nalidixic acid is a first-generation quinolone related to the fluoroquinolones often prescribed in the treatment of Salmonella-caused infections. This finding is consistent with reports 48 on multiple-antibiotic resistance of Salmonella isolates from poultry in India and Egypt, respectively. A higher antibiotic-resistance phenotype profile was observed in Salmonella Typhimurium, as opposed to its previously reported penta-antibiotic resistance phenotype. The increased antibiotic resistance obtained in this study could be due to misuse of antibiotics, resulting in adaptation and changes in the antibiotic sensitivity behaviour of this pathogen as it strives to survive stress conditions within the ecosystem. The isolation of multiple-antibiotic-resistant strains in indigenous chickens is a cause for concern, as this breed of chickens is not usually administered antibiotics during ill health; the multi-drug resistance observed could instead be due to lateral gene transfer within the ecological niche. With regard to sample source, the occurrence of Salmonella multiple-antibiotic resistance followed the order layers ≤ broilers ≤ indigenous chickens. The prolonged use of fluoroquinolones in the treatment of Salmonella-borne infections has led to many cases of antimicrobial resistance globally. 49 This resistance has been attributed to mutations of the DNA gyrase gene and changes in the efflux pump, which is a target of the fluoroquinolones.
50,51 However, Lauderdale et al 47 have suggested the use of extended-spectrum cephalosporins as the last resort in the treatment of Salmonella infections. Hence, there is a need to develop effective therapeutic approaches for the control of these evolving virulent Salmonella strains. Furthermore, a negative correlation exists between the antibiotic-resistance profiles of Salmonella isolates from broilers, indigenous and layer chickens. The Pearson partial correlation was significant at p≤0.01. A positive correlation indicates closer similarities in the antibiotic-resistance patterns of different Salmonella isolates in the study area, while a negative correlation within the sample sources is indicative of non-source-dependent profiles. The high occurrence of antibiotic-resistant Salmonella strains in indigenous chickens could be due to the acquisition of virulence determinants from the environment or through interaction with hosts such as rodents and livestock with which they share feeding and drinking troughs. Also, the high percentage distribution of antibiotic-resistant strains in the broilers, as shown in the clustering profiles, indicates that the isolates share the same antibiotic-resistance histories. According to Forshell et al, 52 the abuse and misuse of antibiotics is a major cause of increasing antibiotic resistance among microorganisms of public health significance such as Salmonella. During processing or dressing operations and routine care of poultry, shedding of gastrointestinal contents can occur, leading to spillages of gut content into the environment. Also, unhygienic practices on farms and in processing industries can aid the transfer of these virulent strains to the public, either through the release of untreated effluents into river channels or into other water bodies. These water bodies form a major resource for livelihoods among rural dwellers. Conclusion This study reports that the similarities in the antibiotic-resistance patterns among isolates from broilers, layers and indigenous chickens reveal similarities in their antibiotic exposure histories. It is therefore suggested that farmers be sensitized to adhere to prescribed guidelines on the use of antibiotics. In addition, the implementation of good sanitation among farm workers, as well as standard operating procedures on farms where animals are housed, should be encouraged to curb the spread of multi-drug-resistant strains of Salmonella, since the latter may pose a threat to public health. The detection of large proportions of diverse multi-antibiotic-resistant Salmonella strains in chickens within the Mafikeng community, especially in the indigenous breed, indicates that these animals may pose a threat to food security and safety. Further studies on the antibiotic-resistance genes harboured by this pathogen could advance knowledge towards the development of suitable antibiotics and other prophylactics to curb Salmonella-caused infections.
2019-10-31T09:10:51.621Z
2019-10-01T00:00:00.000
{ "year": 2019, "sha1": "9da76d52d53460b13db9c155e8e7fd01e2348d82", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=53516", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7a9eda51b2821e1264e0ac7233e35c747f5d8dae", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
235520228
pes2o/s2orc
v3-fos-license
Molecular response in long-term monitoring of patients with chronic myelogenic leukemia (CML) on nilotinib therapy Abstract Targeted therapy with tyrosine kinase inhibitors (TKIs) was introduced for the suppression of the BCR-ABL fusion protein in chronic myelogenic leukemia (CML) about two decades ago. The introduction of TKIs was an elegant way to gain control over the disease and opened a horizon to the development of new and safer drugs to better manage disease progression in CML patients. Although Imatinib (the first TKI) was highly effective for the treatment of CML, the lack of response in some patients and the development of resistance led to the introduction of second-generation TKIs, such as Nilotinib (Tasigna®). This study aimed to characterize the molecular response to Nilotinib over long-term monitoring of CML patients. We analyzed the molecular response rates of twenty-seven CML patients admitted to the Hematology Ward. All of them underwent repeated measurements of the Philadelphia chromosome-associated BCR-ABL transcript at least once every three months between January 2017 and March 2020. All patients showed a remarkable decrease in the expression level of the fusion gene. The largest drop in the expression level was detected after the first period of drug administration. Some of the patients even reached BCR-ABL expression below the detection limit, and it remained stably low during further examinations. In our patients with CML, Nilotinib resulted in a prolonged and deep molecular response. The expression level of the BCR-ABL fusion gene showed a substantial decrease immediately after the start of Nilotinib treatment, evident as early as the third month of therapy. Introduction Imatinib resistance is present in approximately 50% of patients with chronic myelogenic leukemia (CML), and may be due to mutations in the BCR-ABL kinase domain [1][2][3]. Currently, about one third of patients with CML in chronic phase experience an unsatisfactory therapeutic effect with Imatinib [4,5]. The main disadvantages of Imatinib treatment of CML - the occurrence of resistance and the lack of response in some patients - were meant to be overcome with the introduction of second-generation tyrosine kinase inhibitors (TKIs). Nilotinib (Tasigna ®) was approved in 2007 for Philadelphia-positive (Ph+) CML cases as a more potent and highly effective TKI with better effects as a first-line treatment of CML [6]. The long-term aim of targeted treatment is to maintain/stabilize the disease in the chronic phase before it progresses to the accelerated phase and blast crisis, which would eventually reduce survival time to only several months [7]. That is why Nilotinib is generally applied in adults with CML who are resistant and/or intolerant to Imatinib therapy [8]. According to reported data, it is 20-30 times more potent [9] and generally well tolerated, with manageable adverse events [10]. However, Nilotinib is associated with reductions in some biochemical parameters, such as magnesium and phosphate, and elevations in serum lipase, glucose and bilirubin [11]. Approximately half of patients on second-line TKI therapy will have incomplete suppression of the Ph+ clone in the bone marrow, usually without evidence of overt disease progression [2]. That is why it is so important to monitor those patients at regular time intervals. Eghtedar et al. [12] reported that, after a median follow-up of 23 months, 21% of all patients treated with nilotinib (117 individuals) discontinued therapy.
The most common reasons for discontinuation were toxicity, resistance in chronic phase, and transformation to blast crisis [12]. Nevertheless, a significant number of patients who discontinue initial therapy with second-generation TKIs (dasatinib, nilotinib) because of personal preference or toxicity are still in chronic phase and most probably have a favorable outcome. Very few of the patients discontinue because of resistance, and for those, SCT or another TKI is offered as a treatment option [12,13]. Our aim in this study was to evaluate the molecular response of patients in the chronic phase of CML by presenting the results of their molecular genetic tests performed at the Clinic of Hematology, University Hospital "St. Ivan Rilski" (Sofia, Bulgaria), for the period from January 2017 to March 2020. Ethics statement All patients signed a written informed consent form to participate in the study. Selection of patients Altogether, 27 CML patients were selected for genetic analysis. Their therapeutic regimens included treatment with Nilotinib (a second-generation TKI) at a dose of 600 mg daily as first-line therapy. All monitored patients were in the chronic phase of CML, which had been confirmed previously (upon their first admission to the Clinic of Hematology). Their molecular tests were subsequently repeated to monitor the level of the BCR-ABL transcript and estimate their molecular-genetic progress. Results with at least two consecutive measurements of the fusion gene over a total period of four months were included in the statistical analysis. Five patients met these minimum criteria; the other twenty-two patients underwent molecular genetic tests between three and six times over an average period of 15.5 months (7-32 months) (Table 1). Patients in the acute phase of CML, those with only one molecular-genetic measurement, and/or those with a total monitoring period of less than four months were not considered in this study. Molecular-genetic analysis For all patients, the molecular response was assayed based on the expression level of BCR-ABL mRNA in peripheral blood samples taken in K2 EDTA collection tubes. The test was carried out upon admission of each patient to the Clinic of Hematology and subsequently, at intervals of at least three months, as a control monitoring test. Quantitative results were first normalized against a reference gene, in this case ABL [14]. The level of ABL mRNA transcripts was used as an endogenous control. The test was performed automatically on the Cepheid GeneXpert ® platform (RNA isolation and quantification are performed in the GeneXpert cartridge) and the result was given as %BCR-ABL/ABL (IS), as previously described [15,16]. The efficiency value of each test is embedded in the barcode of the Xpert BCR-ABL Ultra cartridge, and the scaling factor is lot-specific (Figure 1). The difference between the BCR-ABL Ct and the ABL Ct (ΔCt) reflects the ratio of the two populations of mRNAs and, ultimately, the fraction of neoplastic cells present [17]. Results Fifteen of our patients with CML (55.6%) reached MMR (Major Molecular Response; BCR-ABL<0.1%) upon their second molecular test. MR4.0 (Molecular Response; <0.01%) was achieved in 16 patients (59.3%) after an average of nine months of treatment. MR4.5 (<0.0032%) was estimated only in patients who were tested at least three times following initial diagnosis (22 patients). In 14 patients (63.6%) altogether, MR4.5 was achieved at approximately 9.5 months after the initiation of Nilotinib treatment.
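As an aside, the ΔCt arithmetic behind the reported %BCR-ABL/ABL(IS) values can be sketched as follows, assuming approximately 100% PCR efficiency; the lot-specific scaling factor embedded in the cartridge barcode is represented here by a placeholder argument.

```python
# Sketch only: convert Ct values to %BCR-ABL/ABL(IS).
def bcr_abl_percent_is(ct_bcr_abl: float, ct_abl: float, scale: float = 1.0) -> float:
    """Assumes ~100% PCR efficiency; 'scale' stands in for the lot-specific
    IS scaling factor read from the cartridge barcode."""
    delta_ct = ct_bcr_abl - ct_abl   # higher deltaCt -> fewer fusion transcripts
    ratio = 2.0 ** (-delta_ct)       # relative abundance of BCR-ABL vs ABL
    return ratio * 100.0 * scale     # percent, adjusted toward the IS

print(bcr_abl_percent_is(ct_bcr_abl=33.0, ct_abl=23.0))  # deltaCt = 10 -> ~0.098%
```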
Estimating the general distribution of the BCR-ABL expression percentages, none of the monitored individuals (except one) reverted to values higher than the results of the previous molecular test (Table 2). Thirteen individuals reached a negative value of fusion gene expression (48.1%). Generally, the expression level dropped substantially after the first application of the drug. The level of BCR-ABL transcripts showed a decrease immediately after the start of Nilotinib treatment, evident at the patient's third-month visit. Discussion The diagnosis of CML is traditionally made either by standard karyotyping techniques or by fluorescence in situ hybridization (FISH). As a Ph chromosome cannot be detected in approximately 5% of cases, we used real-time polymerase chain reaction to detect the BCR-ABL gene [2]. It is a more sensitive alternative and provides a quantification of the relative amount of BCR-ABL mRNA in the peripheral blood [2]. The results for each individual patient are expressed as a ratio of BCR-ABL transcript copies to control gene copies, and can be converted to international standard units [18]. Regular measurement of BCR-ABL transcript levels can potentially be used for frequent monitoring of CML patients, even if it may not fully overlap with the patient's response to treatment [19]. Therefore, we used BCR-ABL mRNA transcript levels expressed in % (IS) as a marker for molecular response in patients. We accepted the complete absence of transcripts (a negative result) as a CMR (Complete Molecular Response), and a reduction to 0.1% [BCR-ABL1 on the international scale (IS) <0.1%] as an MMR (Major Molecular Response) [18]. An MR 4.0-log reduction from a standardized baseline (MR4.0; BCR-ABL1 <0.01% IS) and an MR 4.5-log reduction from a standardized baseline (MR4.5; BCR-ABL1 <0.0032% IS) were also assessed [20]. The early trend in the BCR-ABL/ABL ratio may be clinically useful for the early identification of patients who respond poorly to imatinib treatment [21]. Patients with an EMR (Early Molecular Response) are expected to have a superior outcome and may continue on therapy, with regular monitoring by real-time PCR. They usually present with a much lower disease burden, which makes them more suitable for conventional non-transplant therapy [21]. In the NOVEL study, at 3 months, 20 out of 27 patients achieved EMR (BCR-ABL1<10% IS) [10]. In a study from 2016, Nilotinib resulted in higher rates of early molecular response [22]. In accordance with those results, we achieved even higher rates of EMR: in our cohort, 25 of our 27 patients (93%) presented with BCR-ABL1<10% IS (Table 2). In the same study [22], at 3 months more than half of all patients on Nilotinib achieved a molecular response of 4.5. According to Kuo et al. [10], 36.5% of the patients discontinued Nilotinib treatment due to an unsatisfactory therapeutic effect. The cumulative MMR rate by 3 months was estimated at 11.9%, and none of the patients achieved MR4.0 or MR4.5. By 24 months, 56.8% had achieved MMR, 16.2% MR4.0 and 7.4% MR4.5 [10]. In our study, 15 of our patients (55.6%) with CML reached MMR (BCR-ABL<0.1%) upon their second molecular test. MR4.0 (<0.01%) was achieved in 16 patients (59.3%) after an average of nine months of treatment. MR4.5 (<0.0032%) was estimated only in patients who had had the test done at least three times following initial diagnosis (22 patients). Therefore, in our study, MR4.0 and MR4.5 were detected at higher levels compared with the NOVEL study.
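A minimal sketch classifying a %BCR-ABL/ABL(IS) value against the response thresholds quoted above; an undetectable transcript is mapped to CMR, subject to the detection-limit caveat raised later in the text.

```python
# Sketch only: map a %BCR-ABL/ABL(IS) value to the response categories above.
from typing import Optional

def molecular_response(pct_is: Optional[float]) -> str:
    if pct_is is None:                       # transcript not detected
        return "CMR (below detection limit)"
    if pct_is < 0.0032:
        return "MR4.5"
    if pct_is < 0.01:
        return "MR4.0"
    if pct_is < 0.1:
        return "MMR"
    return "no major molecular response"

for value in (None, 0.002, 0.008, 0.05, 2.5):
    print(value, "->", molecular_response(value))
```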
Similarly to earlier studies [22], 55.6% of our 27 patients with CML reached MMR (BCR-ABL<0.1%) upon their second molecular test. Altogether, MR4.5 was achieved at approximately 9.5 months after initiation of Nilotinib treatment in 63.6% of patients. Although the Cepheid system is very sensitive in estimating the expression level of the fusion gene, a negative result may indicate a very low number of gene transcripts rather than their complete absence (Table 2). That is why some laboratories consider an increase of 0.5 log to be insignificant [18]. Nevertheless, the system demonstrates low inter-laboratory variation and a clinically demonstrated limit of detection of <4.5-log reduction (0.0032%) [23], and proves adequate for the identification of leukemia cells harboring the BCR-ABL gene fusion in blood samples. In our study, 13 individuals reached a negative value of fusion gene expression (Table 2). That should be interpreted as a very low amount of Ph(+) cells below the detection limit of the machine, and not as complete eradication of the mutant transcripts. This is confirmed by the subsequent results in those patients, as the values returned to the detection range of the device. As a whole, in all 27 patients there was a stable decrease in the amount of the expressed transcript, which was maintained from the third month's measurement onward. Conclusions In this study, treatment with Nilotinib was well tolerated and proved effective in achieving molecular response in patients with chronic-phase CML monitored at the Clinic of Hematology at "St. Ivan Rilski" Hospital, Sofia, Bulgaria. Disclosure statement No potential conflict of interest was reported by the authors. Data availability All data (anonymized) that support the findings of this study are available from the corresponding author upon reasonable request.
2021-06-22T17:55:05.950Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "3422ee85b0c8fb5ccf647ad9e079a9865b99eded", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/13102818.2021.1912639?needAccess=true", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "334195a5b484286b33e6315491edf18c4b4a995f", "s2fieldsofstudy": [ "Medicine", "Chemistry", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
56733493
pes2o/s2orc
v3-fos-license
Clinical Management of Pneumonitis in Patients Receiving Anti–PD-1/PD-L1 Therapy CASE STUDY A 48-year-old gentleman with metastatic melanoma currently receiving the cytotoxic T-lymphocyte–associated antigen 4 (CTLA-4) inhibitor, ipilimumab (Yervoy), and the programmed cell death protein 1 (PD-1) inhibitor, nivolumab (Opdivo), returned for evaluation prior to receiving cycle 2. The patient presented with new-onset dyspnea and a non-productive cough over the past week, with a temperature of 100.6°F at home on one occasion. He was placed on observation for fever, cough, and shortness of breath. The patient had no previous history of lung disease and was a nonsmoker. Diminished breath sounds were noted on auscultation. However, the patient was without fever or chills, with a heart rate of 101 beats per minute and a blood pressure of 110/75 mm Hg. We obtained a computed tomography (CT) scan of his chest. The CT demonstrated diffuse ground-glass opacities in his bilateral lower lobes and some minor interstitial thickening of his right middle lobe, possibly suggestive of inflammation or cryptogenic organizing pneumonia. Based on his presentation and CT findings, the patient was initially treated empirically with antibiotics for suspected pneumonia vs. pneumonitis. During the first 12 hours in observation, the patient experienced increasing dyspnea and cough and was admitted to the hospital. Nebulizer treatments were administered with no improvement, so the patient was started on high-dose corticosteroids at 1 mg/kg, and pulmonary and infectious disease consults were ordered. After the administration of steroids, the patient's cough and breathing improved and he remained afebrile, eliciting a high suspicion for immune-related pneumonitis. The patient then underwent bronchoscopy to rule out other etiologies. Bronchoalveolar lavage was performed, which yielded no pathogenic organisms. The patient was placed on a 3-week course of a high-dose steroid taper, following which immunotherapy was reinstated. Within 4 days he again presented with similar symptoms, was restarted on high-dose steroids, and immunotherapy was permanently discontinued. Currently, anti-PD-1/PD-L1 checkpoint antibodies are approved by the FDA to treat melanoma, non-small cell lung cancer (NSCLC), Hodgkin lymphoma, Merkel cell carcinoma, renal cell carcinoma (RCC), urothelial carcinoma, and head and neck cancers (Boutros et al., 2016; Topalian et al., 2012). Anti-PD-1 antibodies have also shown promise for patients with triple-negative breast cancer and metastatic colorectal cancer with mismatch repair deficiency (Easton, 2017). With positive clinical outcomes and improved understanding of the pathobiology, anti-PD-1/PD-L1 medications continue to be approved by the FDA as first-line monotherapy and combination therapy.
Immune checkpoint blockade is associated with unique side effects referred to as immune-related adverse events (irAEs; Naidoo et al., 2015). A severe, potentially life-threatening irAE associated with immunotherapy is pneumonitis, which may develop at any time. Pneumonitis is defined as a noninfectious inflammation of the lung parenchyma (Naidoo et al., 2015, 2017; Nishino et al., 2016b; Wu et al., 2017). Generally, irAEs with anti-PD-1/PD-L1 therapy occur less frequently compared with anti-CTLA-4 monotherapy; however, pneumonitis occurs more frequently in patients receiving anti-PD-1/PD-L1 therapy compared with CTLA-4 inhibitors (Boutros et al., 2016; Friedman, Proverbs-Singh, & Postow, 2016). A recent meta-analysis by Nishino and colleagues (2016a) evaluated 20 studies involving approximately 4,500 patients participating in PD-1 inhibitor clinical trials and found that the overall incidence of pneumonitis ranged from 0% to 10.6%. Furthermore, combination therapy with other checkpoint inhibitors or with therapies (such as chemotherapy and targeted therapies) carrying a known risk of pulmonary adverse events has been shown to increase the occurrence of pneumonitis (Boutros et al., 2016; Naidoo et al., 2015). CLINICAL SYMPTOMS OF PNEUMONITIS With all anti-PD-1/PD-L1 agents, fatigue, pyrexia, chills, flu-like symptoms, and infusion reactions are typical (Eigentler et al., 2016; Naidoo et al., 2015; Weber et al., 2015). The challenge for advanced practice providers (APPs) is to distinguish expected side effects from severe adverse events, and to evaluate differential diagnoses such as pneumonitis, pneumonia, and cryptogenic organizing pneumonia (COP), all of which require different treatments. Patients with immune-related pneumonitis may experience additional irAEs, including dermatitis, colitis, and endocrinopathies such as hypophysitis and thyroiditis (Eigentler et al., 2016; Weber et al., 2015). Research by Naidoo and colleagues (2017) reported that 58% of patients (n = 43) diagnosed with pneumonitis at two institutions experienced other immune-related adverse events, with skin rash being the most common (n = 8). This is important because APPs treating patients with immunotherapy need to understand that more than one irAE can occur at the same time (Weber et al., 2015). DIAGNOSIS The Common Terminology Criteria for Adverse Events (CTCAE) version 4.03 is the standard classification and severity grading scale for adverse events in cancer therapy, clinical trials, and oncology settings based on clinical symptoms and objective findings (National Cancer Institute and National Institutes of Health, 2009). It provides the framework for toxicity grading of irAE symptoms, which can then be used to follow management algorithms (Eigentler et al., 2016; Michot et al., 2016; Mistry, Forbes, & Fowler, 2017; Naidoo et al., 2015). The definition of pulmonary toxicity according to the CTCAE is as follows: grade 1 when the patient is asymptomatic, with findings on chest imaging only; grade 2 for mild presenting symptoms that limit the patient's activities of daily living; grade 3 for worsening or severe symptoms that limit self-care; and grade 4 for life-threatening symptoms (National Cancer Institute and National Institutes of Health, 2009).
Chest computed tomography (CT) is preferred over a standard chest x-ray to aid in the diagnosis of pneumonitis (Nishino et al., 2016a). Radiographic findings on chest CT, in addition to clinical symptoms (i.e., new-onset dyspnea, cough), aid in toxicity grading (Nishino et al., 2016a). In a retrospective study of 20 patients with anti-PD-1-induced pneumonitis, CT findings showed more extensive pneumonitis in the lower lobes compared with the middle and upper lobes (Nishino et al., 2016a). Although immunotherapy (CTLA-4, PD-1/PD-L1) has been associated with sarcoid-like pulmonary changes, including lymphadenopathy, imaging can present with varied radiographic findings (Chuzi et al., 2017). Among the specific CT findings, ground-glass opacities (GGOs) were present in 13 of the 20 (65%) patients, and all patients presented with GGOs in a COP pattern (Nishino et al., 2016a). There is some debate as to whether a diagnostic bronchoscopy is required prior to initiation of treatment to visualize inflammation and to rule out infection (Eigentler et al., 2016; Naidoo et al., 2015; Weber et al., 2015). Furthermore, there is no set standard for when to perform bronchoscopy to diagnose pneumonitis. Bronchoalveolar lavage (BAL) obtained via flexible bronchoscopy can provide information to the clinician about infectious, inflammatory, and immunologic processes at the alveolar level through analysis of the BAL fluid by cell counts, cultures, and various histochemical tests (i.e., human herpesvirus 6 [HHV-6], vesicular stomatitis virus [VSV], cytomegalovirus [CMV]; Meyer et al., 2012). Because the diagnosis of pneumonitis is one of exclusion, high-dose steroids can be used to distinguish inflammation from infection in patients who are not responding to antibiotics (Wu et al., 2017). CLINICAL MANAGEMENT Depending on the toxicity grade, patients with immunotherapy-induced pneumonitis are generally treated with high-dose corticosteroids, with a median treatment time of 4 to 6 weeks (Diamantopoulos et al., 2017; Michot et al., 2016; Mistry et al., 2017; Nishino et al., 2016a). Table 1 identifies the authors who have published management algorithms for the diagnosis and treatment of immunotherapy toxicities, including pneumonitis. For patients with grade 1 toxicity, anti-PD-1/PD-L1 treatment is held until the chest CT findings resolve. Often, no intervention is needed until patients have grade 2 to 4 toxicity. Patients with grade 2 toxicity are considered to have moderate severity and are treated with oral prednisone at 1 mg/kg/day; those with higher toxicity (grades 3/4) require 2 to 4 mg/kg/day of intravenous methylprednisolone (Eigentler et al., 2016; Mistry et al., 2017; Naidoo et al., 2015; Nishino et al., 2016c; Weber et al., 2015). The algorithms indicate that the treatment of patients with grade 3 (severe) or grade 4 (life-threatening) toxicity should include infectious disease and pulmonary consultations for further evaluation and bronchoscopy (Eigentler et al., 2016; Michot et al., 2016; Mistry et al., 2017; Naidoo et al., 2015). Table 2 summarizes the treatment and management of pneumonitis by grade and describes the interventions needed to treat pneumonitis, when to hold or discontinue immunotherapy treatment, the duration of treatment based on grade, and follow-up recommendations.
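For illustration only, and not as clinical guidance, the grading and management steps just described can be condensed into a simple lookup; the wording of each entry paraphrases this article's summaries of the cited algorithms rather than the CTCAE or the guidelines verbatim.

```python
# Sketch only: CTCAE pneumonitis grade -> management, as paraphrased above.
CTCAE_PNEUMONITIS = {
    1: ("asymptomatic; radiographic findings only",
        "hold anti-PD-1/PD-L1 until chest CT findings resolve"),
    2: ("moderate symptoms limiting activities of daily living",
        "hold therapy; oral prednisone 1 mg/kg/day"),
    3: ("severe symptoms limiting self-care",
        "IV methylprednisolone 2-4 mg/kg/day; infectious disease and "
        "pulmonary consults; permanently discontinue immunotherapy"),
    4: ("life-threatening symptoms",
        "IV methylprednisolone 2-4 mg/kg/day; infectious disease and "
        "pulmonary consults; permanently discontinue immunotherapy"),
}

def pneumonitis_plan(grade: int) -> str:
    symptoms, management = CTCAE_PNEUMONITIS[grade]
    return f"Grade {grade} ({symptoms}): {management}"

print(pneumonitis_plan(2))
```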
DISCUSSION Pneumonitis typically develops within 8 weeks after the initiation of therapy (Nishino et al., 2016c). It is important for APPs to be aware of the possibility that pneumonitis can develop any time after the initiation of therapy and to be vigilant for the presenting symptoms. The patient outlined in the case study developed pneumonitis initially at week 2 and, on reinitiation of therapy, developed pneumonitis within 4 days. As with the patient described in the case study, findings on CT imaging may be erroneously interpreted as tumor progression or infectious pneumonia. Lung events, such as pneumonitis, are often the main reason for the discontinuation of anti-PD-1/PD-L1 therapy (Michot et al., 2016). As a result, increased awareness by radiologists and APPs is necessary to adequately diagnose pneumotoxicity related to the use of immunotherapy. Treatment and follow-up of irAEs present a challenge in immuno-oncology practice. Treatment algorithms for irAEs are empiric in approach and are often determined by practice settings and organizations. Some practice settings have established a consensus on the treatment of pneumonitis, leading to institutional guidelines. However, no prospective clinical trials have been identified that determine an optimal treatment approach for the management of pneumonitis and other irAEs. In February 2018, new guidelines for the management of irAEs in patients treated with immune checkpoint inhibitor therapy were published by the American Society of Clinical Oncology after a systematic review by a multidisciplinary, multiorganizational panel of experts (Brahmer et al., 2018). The recommendations for the management of pneumonitis addressed in the guidelines advise clinicians to hold immunotherapy until the patient's pneumonitis is grade 1 or less, and to permanently discontinue immune checkpoint inhibitor therapy for any patient experiencing a grade 3/4 toxicity (Brahmer et al., 2018). The National Comprehensive Cancer Network (NCCN) also provides immunotherapy teaching and monitoring tools that can be utilized by patients and providers to monitor known toxicities seen with immunotherapy (NCCN, 2017). As novel biologic immunotherapy agents continue to emerge as the gold standard for the treatment of cancer, there is the potential for increasing rates of adverse events. Current guidelines rely on expert consensus to address irAE management; therefore, continued research on the monitoring, diagnosis, and treatment of immunotherapy toxicities is needed to strengthen the recommendations. Implications for Advanced Practice Providers The case study illustrates the difficulty in diagnosing and managing immunotherapy-induced pneumonitis. Clinicians need to be mindful of the pneumonitis risk with anti-PD-1/PD-L1 agents and the factors that may increase a patient's risk (combination therapy, solid tumors [NSCLC, RCC], smoking, age), evaluating any new symptoms as possibly treatment related. As front-line clinicians, APPs are positioned to identify such toxicities in their patients because they often see patients at each visit and can recognize new symptoms and subtle changes. Clinical education is needed for providers caring for patients receiving immunotherapy to identify, grade, and manage the various toxicities. Although national guidelines have not been adopted, algorithms have been developed to aid in the management of these patients. In addition, the NCCN has provided a robust immunotherapy teaching tool that APPs can utilize to educate patients for early detection of toxicities. Table 1. Pneumonitis Management Algorithms. Author: Brahmer et al.
(2018)Management of Immune-Related Adverse Events in Patients Treated With Immune Checkpoint Inhibitor Therapy: American Society of Clinical Oncology Clinical Practice Guideline
Effects of Treadmill Training on Muscle Oxidative Capacity and Endurance in People with Multiple Sclerosis with Significant Walking Limitations
DOI: 10.7224/1537-2073.2018-021 © 2019 Consortium of Multiple Sclerosis Centers.

Multiple sclerosis (MS) is an autoimmune disease that causes demyelination of axons in the central nervous system. Multiple sclerosis is associated with various cognitive and physical impairments, with declines in mobility being reported as one of the most common symptoms of the disease.1,2 Moreover, reduced mobility is accompanied by decreases in physical activity and physiological deconditioning, which may contribute to the progression of physical disability in people with MS.3-5 Physiological deconditioning in MS is characterized by declines in exercise capacity, alterations in muscle phenotype, and reduced muscle function.6-10 Indeed, previous studies have shown that moderate-to-severe levels of disability are associated with a 15% to 30% reduction in aerobic capacity (VO2peak) and a 30% to 50% decrease in muscle strength compared with mild levels of disability in people with MS.6 Furthermore, VO2peak, strength, and muscle oxidative capacity are related to walking impairments in MS, suggesting that these aspects of deconditioning may be important targets for rehabilitation interventions.11-15

While exercise training has been shown to increase VO2peak in people with MS, measures of whole-body oxygen consumption are influenced by central and peripheral factors and do not directly evaluate changes in muscle-specific oxidative capacity.16-18 Muscle oxidative capacity, or the capacity to produce energy through aerobic pathways, is directly related to muscle endurance and, therefore, may be important in interventions aiming to improve endurance in people with MS. A recent study demonstrated improved muscle metabolism with electrical stimulation training in persons with MS,19 but there is little evidence to support the use of voluntary exercise in interventions aimed at improving muscle oxidative capacity.

Few studies have evaluated muscle plasticity in individuals with MS who have moderate-to-severe levels of disability.20,21 People with MS who have moderate-to-severe levels of disability have substantial ambulatory limitations and require the use of assistive devices. Therefore, they may not easily or safely participate in traditional endurance exercise training interventions (walking over ground, running, cycling, etc). Previous studies evaluating endurance exercise training in populations with limited mobility have used specialized equipment such as recumbent cycles and body weight-supported treadmill systems to create adaptive exercise training programs.21-23 Body weight-supported treadmill training (BWSTT) has been shown to improve muscle strength in people with MS who have moderate-to-severe disability,21 but the effect of BWSTT on muscle oxidative capacity and endurance has not been evaluated. A novel form of BWSTT using an antigravity treadmill provides a way for people with significant disability to exercise safely without the potential discomfort of the overhead harness typically used with BWSTT.

The present study aimed to evaluate the effects of antigravity treadmill training on muscle oxidative capacity, muscle endurance, and walking function in people with MS who have moderate-to-severe levels of disability. We hypothesized that antigravity treadmill training would result in improvements in muscle oxidative capacity and muscle endurance and that changes in muscle endurance would be associated with improvements in walking function.
Participants
Criteria for enrollment included a diagnosis of MS, physician approval for participation in exercise training, an Expanded Disability Status Scale (EDSS) score of at least 6.0, and age older than 18 years. Demographic information and type of MS were assessed using a self-report questionnaire, and level of disability was evaluated using the self-administered EDSS.24 The EDSS is commonly used to characterize disability in people with MS, and, by definition, those with EDSS scores greater than 6.0 have substantial ambulatory limitations and require the use of assistive devices. This study was approved by the Research Review Committee at Shepherd Center (Atlanta, GA), and all the participants gave written informed consent before participation.

Exercise Training
The antigravity treadmill training was performed using the AlterG Anti-Gravity Treadmill system (AlterG Inc, Fremont, CA) and consisted of 16 sessions over approximately 8 weeks. Sessions were held on nonconsecutive days, two per week, and were facilitated by physical therapists and trained research personnel. Sessions included a 2-minute warm-up, a 20-minute exercise training period, and a 2-minute cool-down. A rest break (3-5 minutes) was provided halfway through the exercise training period. Initial body weight support and treadmill speed were set to 50% and 0.5 mph, respectively. Heart rate and rating of perceived exertion (RPE) were measured at minutes 10 and 20 of the exercise training period. The RPE was assessed using a modified Borg scale (scoring range, 0-10).25,26 Body weight support (35%-70%) and treadmill speed (0.2-2.5 mph) were adjusted throughout the training program to maintain effort without exceeding an RPE of 8.0.

Experimental Protocols
Muscle oxidative capacity, muscle endurance, and walking function were measured before the start of the training program and within 10 days of completing the last exercise training session.

Muscle Oxidative Capacity
Muscle oxidative capacity of the medial gastrocnemius was measured using near-infrared spectroscopy (NIRS) as previously described elsewhere.27,28 In brief, NIRS signals can be used to measure changes in muscle oxygen saturation, and, during periods of ischemia, the rate of change in NIRS signals reflects the rate of oxygen metabolism.27,28 The NIRS measures of muscle oxygen metabolism can be used after a short bout of exercise to quantify muscle metabolic capacity. Immediately after exercise, oxygen metabolism is increased to restore intramuscular phosphocreatine stores, which are depleted during exercise, and the rate at which the metabolic rate returns to basal levels (phosphocreatine restored) can be used as an index of muscle oxidative capacity.29,30 Thus, NIRS can be used to evaluate the recovery of muscle metabolic rate using a series of ischemic periods after exercise.28 The rates of oxygen metabolism during a series of postexercise ischemic periods can be fitted to the exponential function y(t) = End − Δ × e^(−kt), where End is the end-exercise metabolic rate and the recovery rate constant, k, is used as a measure of muscle oxidative capacity.28 The NIRS measures of oxidative capacity reflect the capacity of muscle mitochondria to produce cellular free energy (adenosine triphosphate) using oxygen (oxidative phosphorylation), and this technique has been cross-validated against established in vivo (31P magnetic resonance spectroscopy)30 and in situ (high-resolution respirometry)31 assessments of mitochondrial oxidative phosphorylation.

The NIRS optical probe (PortaMon, Artinis Medical Systems, Zetten, the Netherlands) was placed over the medial gastrocnemius muscle with the participant in the supine position. Exercise was performed by applying 10 to 30 seconds of electrical stimulation to the gastrocnemius muscle. Intermittent ischemia was created by rapidly inflating a blood pressure cuff just proximal to the knee joint to approximately 100 mm Hg above systolic blood pressure. A series of 18 to 22 blood pressure cuff inflations (5-20 seconds each) were performed immediately after exercise to measure the recovery of oxygen consumption. Muscle oxidative capacity was quantified as the average rate constant from two recovery tests.28
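The recovery analysis above is a curve fit to a monoexponential. Below is a minimal sketch of how the rate constant k can be extracted with scipy; the data points are fabricated to follow the model and are for illustration only.

```python
# Sketch of the monoexponential recovery fit described above,
# y(t) = End - Delta * exp(-k t), where k indexes muscle oxidative capacity.
# The sample data are fabricated placeholders.
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, end, delta, k):
    """Postexercise metabolic-rate recovery; k is the rate constant (min^-1)."""
    return end - delta * np.exp(-k * t)

t_min = np.array([0.1, 0.3, 0.6, 1.0, 1.5, 2.2, 3.0, 4.0])          # time (min)
mvo2 = np.array([0.62, 0.80, 1.05, 1.32, 1.55, 1.75, 1.85, 1.90])   # a.u.

popt, _ = curve_fit(recovery, t_min, mvo2, p0=(2.0, 1.5, 1.0))
end, delta, k = popt
print(f"fitted rate constant k = {k:.2f} min^-1")
```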
Muscle Endurance
Muscle endurance was evaluated using accelerometer-based mechanomyography with twitch electrical stimulation32,33 to assess skeletal muscle-specific (peripheral) endurance. Muscle contraction intensity was measured using an accelerometer (WAX9, Axivity Ltd, Newcastle upon Tyne, UK) placed on the skin over the gastrocnemius muscle using double-sided tape. Electrical stimulation was applied at three low-stimulation (twitch) frequencies (2, 4, and 6 Hz) for 3 minutes each (9 minutes total). Muscle endurance was defined as the preservation of muscle contraction intensity during repeated muscle contractions and was quantified using an endurance index. The endurance index was calculated as the percentage of the acceleration measured at the end of each stage of frequency (2, 4, and 6 Hz) relative to peak acceleration. The reference peak acceleration for each frequency was defined as the peak acceleration measured before the end of each stage.
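The endurance index described above is a simple ratio. A short sketch with hypothetical per-stage accelerations:

```python
# Sketch of the endurance-index calculation described above: end-of-stage
# acceleration expressed as a percentage of the reference peak acceleration.
# The per-stage values are fabricated placeholders.

def endurance_index(peak_accel: float, end_stage_accel: float) -> float:
    """Percentage of reference peak acceleration preserved at end of a stage."""
    return 100.0 * end_stage_accel / peak_accel

# Hypothetical (peak, end-of-stage) accelerations for the 2, 4, and 6 Hz stages
stages = {2: (1.20, 1.02), 4: (1.35, 0.95), 6: (1.50, 0.90)}
for hz, (peak, end) in stages.items():
    print(f"{hz} Hz: endurance index = {endurance_index(peak, end):.1f}%")
```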
Walking Function
Walking function was measured using the 2-Minute Walk Test (2MWT) as previously described elsewhere.34 The 2MWT is a standard measure of the NIH Toolbox for Assessment of Neurological and Behavioral Function, and the reliability and validity of this test as a measure of walking function have been evaluated in several studies.35-37 The 2MWT measures the distance walked during a 2-minute period. Participants were permitted to use an assistive device during the 2MWT.

Data Analysis
Statistical analysis was performed using IBM SPSS Statistics for Windows, version 23.0 (IBM Corp, Armonk, NY). Measures of muscle oxidative capacity and walking function before and after antigravity treadmill training were compared using the paired t test. Changes in muscle endurance (endurance index) measured at each frequency of electrical stimulation before and after antigravity treadmill training were evaluated using a two-way repeated-measures analysis of variance (stimulation frequency × training). Data were evaluated for normality violations using the Shapiro-Wilk test. Significance was assumed at P < .05. All values are reported as mean ± SD unless otherwise indicated.

Results
Participant characteristics are listed in Table 1. Nine people with MS were enrolled in the study: seven completed the 16 training sessions, and six participants completed all the posttraining testing sessions (Table 1). One participant performed two sessions a week for 9 weeks except for attending only one training session in weeks 5 and 9. Two participants dropped out of the study: one was injured in a car accident, causing a lapse in training, and the other experienced changes in resting blood pressure outside of appropriate study parameters. This participant was evaluated and cleared for training by a managing neurologist and a primary care physician after missing multiple sessions but did not meet the training criteria due to missed sessions.

Across all antigravity treadmill training sessions, participants exercised at a mean ± SD of 55.2% ± 17.4% of their (age-predicted) maximal heart rate and reported an RPE of 5.0 ± 1.9. Treadmill walking speed ranged from 0.6 ± 0.2 mph to 1.0 ± 0.5 mph, and body weight support ranged from 36.3% ± 6.8% to 48.6% ± 3.5%. Posttraining measurements were obtained 5 to 9 days after the last day of training. Muscle oxidative capacity increased from 0.64 ± 0.19 min−1 to 1.08 ± 0.52 min−1 (68.2%; P < .05) (Figure 1A). No significant changes in walking function were found (Figure 1B). There was a main effect of training on the endurance index (Figure 1C-E).
Discussion
The primary finding of the present study was that antigravity treadmill training improved muscle oxidative capacity in the gastrocnemius muscles of people with MS. Consistent with previous studies, we found that increases in muscle oxidative capacity were accompanied by increases in muscle endurance.38 The present results suggest that endurance exercise training can induce muscle plasticity in people with MS, even in the presence of moderate-to-severe disability.

The antigravity treadmill training used in the present study provided a partial weight-bearing aerobic exercise stimulus to the lower-extremity muscles, and we found an approximately 68% increase in muscle oxidative capacity of the gastrocnemius after 8 weeks of training. Indeed, this increase is comparable in magnitude with previous studies reporting approximately 50% improvements in NIRS measures of muscle oxidative capacity in nondiseased individuals after 4 weeks (five times per week) of voluntary wrist flexion exercise training.39 Interestingly, we found a wide range of improvements in muscle oxidative capacity after training (22%-150%), and two participants improved their muscle oxidative capacity to values similar to those previously reported for controls (~1.7 min−1).40 Indeed, previous studies have reported that people with MS have greater variability in gait mechanics compared with controls,41-43 and, therefore, the variability in the magnitude of adaptations measured in the present study may reflect differences in lower-limb muscle activation and gait mechanics among participants. Although studies using electromyography have demonstrated reduced gastrocnemius activation during push-off compared with controls,41,44 the improvement in oxidative capacity observed in the present study suggests that voluntary activation of the gastrocnemius muscle during the antigravity treadmill training was sufficient to initiate the biochemical pathways required for mitochondrial biogenesis.45 Reductions in muscle oxidative capacity may be related to walking dysfunction in people with MS,7,13,40 and the present findings lend support to the use of antigravity treadmill training in interventions aiming to improve muscle oxidative metabolism in people with MS who have significant walking impairments.

Previous studies have reported reduced oxidative enzyme activity and declines in type I muscle fiber cross-sectional area in people with MS,46,47 suggesting a shift in muscle characteristics to favor a more glycolytic, fatigable phenotype. Yet, few studies have evaluated exercise-mediated improvements in muscle-specific endurance in this population.48 We found that antigravity treadmill training improved muscle endurance by approximately 56% on average in the medial gastrocnemius. The present findings are consistent with previous work reporting a 30% increase in cross-sectional area and a 27% increase in the proportion of type I (fatigue-resistant) muscle fibers with lower-extremity endurance training in people with MS.49 Although changes in fiber type composition were not evaluated in the present study, the observed improvements in muscle oxidative capacity and muscle endurance indicate that exercise training resulted in a more oxidative, fatigue-resistant muscle profile in these participants. These results demonstrate the plasticity of skeletal muscle in people with MS who have significant walking impairments and establish a physiological link between metabolic and functional muscle adaptations in this population.

We did not find improvements in muscle endurance to be associated with significant improvements in walking function. These findings suggest that the improvements in gastrocnemius muscle oxidative capacity and endurance observed in the present study were not sufficient to improve walking function. Notably, five of six participants improved their 2MWT distance (3.3%-212%), and, although not statistically significant, the calculated overall effect size for the 2MWT was d = 0.31. This effect size is similar in magnitude to values reported in previous studies evaluating the effect of various exercise interventions on walking function in MS (d = 0.2).50 The exercise protocol in the present study consisted of a training frequency at the lower end of recommended exercise guidelines for persons with MS (~two sessions per week),51,52 and future studies should evaluate changes in muscle and walking function associated with antigravity treadmill training paradigms of higher frequency.

There are several limitations to consider in interpreting the findings of the present study. Primarily, only six participants were tested, and a larger sample size could improve the strength of the findings. However, the present study evaluated a relatively homogenous group of participants and was sufficiently powered to evaluate changes in muscle oxidative capacity (d = 1.1) and muscle endurance (d = 1.5). Based on the effect size calculated for walking function, an additional 65 participants would be needed to achieve adequate power for this outcome. Considering the degenerative nature of MS, it may also be difficult to interpret the magnitude of change in the outcome measures without comparison with a follow-up (detraining) measurement or a nontraining control group.39 Posttraining measurements were also obtained at variable lengths of time after the last bout of exercise (5-9 days), and the participants with the longest periods between the last training session and testing (8 and 9 days) were among the lowest responders regarding muscle oxidative capacity. Thus, the present results may have underreported the magnitude of improvements in muscle oxidative capacity and endurance in these participants.39 Although we found robust improvements in oxidative capacity and endurance in the medial gastrocnemius muscle, it should also be considered that these findings may not reflect changes in other lower-extremity muscles. Moreover, previous studies have reported asymmetrical lower-limb muscle function in people with MS,13,53 and the present study did not evaluate bilateral differences, which may have provided more insight when comparing exercise-mediated adaptations in muscle endurance and walking function.
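For readers who want to reproduce the kind of sample-size estimate quoted above, a sketch using statsmodels follows. The alpha, power, and paired-test settings are our assumptions; the paper does not state which settings underlie its figure of 65 additional participants.

```python
# Sketch of a sample-size estimate for the 2MWT effect size quoted above
# (d = 0.31). Alpha, power, and the paired-test assumption are ours, not the
# paper's stated settings.
from statsmodels.stats.power import TTestPower

n_total = TTestPower().solve_power(effect_size=0.31, alpha=0.05, power=0.80)
print(f"estimated total sample size for d = 0.31: {n_total:.0f}")
```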
PRACTICE POINTS
• Antigravity treadmill systems use lower-body positive pressure to deliver body weight support, providing an opportunity for people with significant walking impairments to participate in treadmill exercise training.
• Antigravity treadmill training can improve muscle oxidative capacity and muscle endurance in people with MS.
• Endurance exercise training can induce muscle plasticity in people with MS, even in the presence of moderate-to-severe disability.

In conclusion, antigravity treadmill training can improve muscle oxidative capacity and muscle endurance in adults with MS who have moderate-to-severe disability. Further investigation is warranted to establish the role of muscle oxidative capacity and endurance in the rehabilitation of people with MS.
Quarkonia and Heavy-Quark Relaxation Times in the Quark-Gluon Plasma

A thermodynamic T-matrix approach for elastic 2-body interactions is employed to calculate spectral functions of open and hidden heavy-quark systems in the Quark-Gluon Plasma. This enables the evaluation of quarkonium bound-state properties and heavy-quark diffusion on a common basis and thus to obtain mutual constraints. The two-body interaction kernel is approximated within a potential picture for spacelike momentum transfers. An effective field-theoretical model combining color-Coulomb and confining terms is implemented with relativistic corrections and for different color channels. Four pertinent model parameters, characterizing the coupling strengths and screening, are adjusted to reproduce the color-average heavy-quark free energy as computed in thermal lattice QCD. The approach is tested against vacuum spectroscopy in the open (D, B) and hidden (Psi and Upsilon) flavor sectors, as well as in the high-energy limit of elastic perturbative QCD scattering. Theoretical uncertainties in the static reduction scheme of the 4-dimensional Bethe-Salpeter equation are elucidated. The quarkonium spectral functions are used to calculate Euclidean correlators which are discussed in light of lattice QCD results, while heavy-quark relaxation rates and diffusion coefficients are extracted utilizing a Fokker-Planck equation.

(e-mail: friek@comp.tamu.edu, rapp@comp.tamu.edu)

I. INTRODUCTION

Hadrons containing heavy quarks are widely used to deduce basic properties of the strong force as given by Quantum Chromodynamics (QCD) [1]. Heavy-quark (HQ) systems are also valuable for studying hot and dense matter, in particular for temperatures and quark chemical potentials which are parametrically small compared to the HQ mass, T, µ_q ≪ m_Q (Q = c, b). Under these conditions, which are believed to encompass phase changes of the medium, HQ momenta (p_Q² ∼ m_Q T) are large relative to those of the (light) partons (p² ∼ T²) constituting the heat bath. This leads to simplifications in the theoretical description which facilitate the task of studying the medium. For example, the prevalence of elastic interactions with spacelike momentum transfers suggests that a potential picture for HQ interactions, successful in vacuum, may remain valid in the medium.

The modifications of heavy quarkonia (charmonium and bottomonium) in the medium are believed to reveal quark deconfinement in the Quark-Gluon Plasma (QGP) as produced in ultrarelativistic heavy-ion collisions (URHICs), cf. Refs. [2-4] for recent reviews. In addition, open heavy-flavor particles are being utilized to extract transport properties of the medium formed at the Relativistic Heavy-Ion Collider (RHIC), by computing their diffusion coefficient in the QGP, see, e.g., Ref. [5] for a recent review. In Ref. [6] it was suggested that the (in-medium) forces responsible for heavy-quarkonium binding may be closely related to those governing the diffusion of an individual heavy quark in the QGP. The basis for this idea is that the exchanged 4-momentum, k = (k_0, k⃗), for the elastic scattering of a (slow) heavy quark is essentially "static", i.e., the energy transfer is parametrically smaller than the 3-momentum transfer, k_0 ≃ k²/(2m_Q) ≪ |k⃗|, for both an individual heavy quark and a quarkonium bound state.
A possible consequence of such a connection could be that a "strong" interaction in the medium, which binds charmonium states above the critical temperature, at the same time leads to strong correlations in the heavy-light sector, which accelerate HQ thermalization compared to perturbative scattering [6]. The purpose of the present paper is to set up and carry out a framework which enables a systematic investigation of this connection. This requires evaluating in-medium bound and scattering states on an equal footing, which will be realized by employing a thermodynamic T-matrix approach [7,8]. To improve the reliability of this framework, several constraints will be elaborated: input potentials will be adjusted to reproduce the HQ free energy computed in lattice QCD (lQCD) in vacuum and at finite temperature, empirical vacuum spectroscopy in the bound-state regime and perturbative QCD in the high-energy scattering limit will be checked, and euclidean correlators for heavy quarkonia in medium will be tested against lQCD results.

In the vacuum, the description of heavy quarkonia within a potential framework can be made rigorous as an effective field theory of QCD with heavy quarks, so-called potential non-relativistic QCD (pNRQCD) [1]. In a hot and dense medium, however, additional scales enter the problem (e.g., temperature, T, and Debye screening mass, m_D), rendering the extension of the potential concept more involved, especially in a strongly interacting system where it is difficult to establish scale hierarchies (for perturbative treatments based on T ≫ m_D ∼ gT, see, e.g., Refs. [9-11]). On the other hand, nonperturbative information on the HQ interaction over a wide range of temperatures is available from thermal lQCD in terms of the free energy, F, of a static QQ̄ pair (strictly speaking, the difference in free energy of the system with and without the HQ pair) [12-14]. In practical approaches, the color-singlet free energy, F_1 (or the pertinent internal energy, U_1 = F_1 − T ∂F_1/∂T), has been utilized as a potential in Schrödinger [15-18] and T-matrix [7,8] equations, and the resulting spectral functions have been checked against lQCD results for euclidean correlation functions. While these approaches suggest that the potential model provides a viable tool at finite temperature, several open issues remain, e.g.: (i) the use of free or internal energy (or even combinations thereof [19]); (ii) the gauge dependence of the color-singlet free energy [20]; (iii) microscopic insights into the screening mechanisms (e.g., color-Coulomb vs. confining forces); (iv) corrections to (or even validity of) the potential ansatz.

In the present paper we do not offer fundamentally new insights on item (i). To cover the uncertainty associated with this problem, our calculations will be carried out for both free and internal energies, which are believed to bracket the limiting cases within their interpretation as a finite-temperature HQ potential. To address items (ii) and (iii) we adopt a recently proposed field-theoretic ansatz [21,22] to describe HQ free energies using a screened Coulomb plus "confining" gluon propagator. These propagators require four input parameters (coupling strengths and screening masses) which are adjusted to reproduce gauge-invariant color-average free energies from lQCD.
Color projections are extracted within the model and utilized to compute color-singlet quarkonium and heavy-light spectral functions, as well as colored correlations which contribute to the transport of heavy quarks in the QGP. Special care is taken of relativistic effects, especially for light quarks, which in our framework is possible once the vector and/or scalar nature of the Coulomb and confining force is specified. For example, the relativistic Breit correction known from electrodynamics [23] naturally emerges as a relativistic effect in the Coulomb potential. We will furthermore check the static approximation underlying the potential picture by comparing different versions of the 3-dimensional reduction scheme used to obtain the T-matrix from an underlying Bethe-Salpeter equation.

This article is organized as follows. In Sec. II we set up the microscopic model used to fit lQCD free energies. In Sec. III we recollect the main elements of the thermodynamic T-matrix formalism, including relativistic corrections. Section IV is devoted to the discussion of our numerical results for quarkonium and heavy-light spectral functions and their applications to euclidean correlators and HQ relaxation times, respectively. We conclude in Sec. V.

II. MICROSCOPIC MODEL FOR THE HEAVY-QUARK POTENTIAL

The recent revival of potential models to describe heavy quarkonia in medium has been largely driven by the prospect of a parameter-free input via static HQ free energies computed in thermal lQCD. However, functional fits to the lattice "data" usually do not offer much insight into the physical mechanisms underlying the medium effects in the potential, nor do they allow one to define vertex structures of the interaction which become important at higher energies and/or in different color channels. In addition, it is desirable to base the starting point on a gauge-invariant quantity, i.e., the color-average free energy. To this end, we adopt the microscopic model developed by Megías et al. [21,22], which we briefly review in the following and then fit to lattice data.

The key idea underlying this model is that the HQ free energy can be accounted for by a nonperturbative ansatz for the gluon propagator giving rise to a string-like confining term in coordinate space, plus a "standard" perturbative term corresponding to a screened color-Coulomb interaction [21], to be understood in static gauge and within dimensional reduction (m_D and m̃_D denote the respective screening masses). The leading nonperturbative effect in the gluon propagator is associated with a dimension-2 gluon condensate, dictated by dimensional considerations, with m_G a "glueball" mass. A priori, a dimension-2 gluon condensate is gauge-dependent and as such a somewhat controversial quantity. Since the gluon propagator is a gauge-dependent quantity, the appearance of gauge-dependent terms is inevitable. However, it has been argued by several authors [24-28] that a dimension-2 condensate encodes nontrivial gauge-invariant information, e.g., topological configurations associated with magnetic monopoles giving rise to a static confining force (which is precisely the effect modeled in the present context). Evidence for a dimension-2 condensate contribution has also been found in QCD sum rules (see Ref. [29] and references therein).
To establish the connection to the HQ free energy (given by a correlator of Polyakov loops, Ω), one starts from its perturbative expression at finite temperature in the color-singlet channel [21], employing the perturbative gluon propagator. The separation-independent term, ⟨A²_{0,a}(x)⟩, in Eq. (3) plays the role of a selfenergy of an individual heavy quark. The main assumption now consists of augmenting the perturbative propagator by the nonperturbative one as given by Eq. (1). Assuming further that the same functional dependence as in Eq. (3) holds in the nonperturbative case, we are led to the corresponding form of the singlet free energy (N_c = 3). In Ref. [21] this approach has been applied to study the Wilson loop and HQ free energy in quenched QCD at finite temperature, and it efficiently reproduces the pertinent lQCD data. As an extension of this treatment we allow for different screening masses in the perturbative and nonperturbative parts of the gluon propagator, which improves the precision of our fits to unquenched lQCD data.

As indicated above, we fit the color-average free energy. Since we aim at a parametrization over a large range in distance and temperature, we employ the definition [30]

F_av(r,T) = −T ln[ (1/9) e^{−F_1(r,T)/T} + (8/9) e^{−F_8(r,T)/T} ]    (6)

without further approximations, which automatically ensures the correct behavior in the limits rT ≫ 1, where the potential is dominated by 2-gluon exchange, F_av(r,T) ≈ −(F_1(r,T))²/(16 T), and rT ≪ 1, where we have F_av(r,T) ≈ F_1(r,T) + T ln(9) [31].

The evaluation of Eq. (6) requires the color-octet and -singlet free energies (in addition, we will use the extracted potentials in the sextet and antitriplet channels for the calculation of HQ relaxation times). In previous works [6,8] these potentials have been approximated by Casimir scaling of the singlet potential. While this is a good approximation for the short-range (perturbative/Coulombic) part of the potential [31,32], it presumably does not apply to the confining part, which rather appears to be universal, including its long-distance limit [30,33,34] (in Ref. [30], potential problems with the computation of the octet free energy on the lattice below T_c have been pointed out). We therefore apply Casimir scaling only to the Coulomb part of the model gluon propagator, while the string part is assumed to be color-blind, i.e., the same in all color channels (such a decomposition is not possible in functional parametrizations of the potential). This is also compatible with the interpretation of the long-distance limit as an individual HQ selfenergy, as discussed below.
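To make the color average concrete, here is a minimal numerical sketch of Eq. (6). The octet input assumes full Casimir scaling, F_8 = −F_1/8, purely for illustration; in the text, Casimir scaling is applied only to the Coulomb part.

```python
# Numerical sketch of the color average in Eq. (6). The octet channel is
# taken as F_8 = -F_1/8 (full Casimir scaling) for illustration only.
import numpy as np

def f_average(f1: np.ndarray, f8: np.ndarray, temp: float) -> np.ndarray:
    """Color-average free energy from singlet and octet channels (Eq. 6)."""
    return -temp * np.log((1.0/9.0) * np.exp(-f1/temp)
                          + (8.0/9.0) * np.exp(-f8/temp))

T = 0.25                        # temperature (GeV)
f1 = np.linspace(-1.0, 0.5, 6)  # toy singlet free energies (GeV)
f8 = -f1 / 8.0                  # Casimir-scaled octet (illustrative)
print(f_average(f1, f8, T))
# For |F_1| << T the result approaches -F_1^2/(16 T), the 2-gluon-exchange limit.
```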
The coordinate-space potential in a color channel a takes the form of Eq. (7), with the Coulomb part F_a^C, the nonperturbative string part F^S, and an r-independent part, F_∞(T), which will be associated with a (real part of the) HQ "selfenergy", Σ_Q^R, below. Similar analytic forms of the potential have been used for fits to the color-singlet potential in Ref. [18]. Let us examine two limits of this expression. First, for T → 0 both screening masses should vanish while the condensate characterized by m_G remains finite [21]; one obtains a limiting form [21] which recovers the Cornell potential in the color-singlet channel and yields a universal string tension in all color channels, consistent with Refs. [30,33,34]. Second, for r → ∞ at finite T, one obtains an expression which is independent of a, consistent with lQCD data in Refs. [14,30]. When additionally taking the zero-temperature limit of F_∞, it diverges, since m_G > 0 while the screening masses go to zero. This is, of course, expected in quenched QCD but needs to be amended in the presence of light quarks, to simulate "string breaking". We implement this by enforcing a flat potential above a string-breaking scale of about r ≃ 1.1 fm, where the potential has reached about 1.1 GeV.

We now also see that the cancellation between the leading piece of the second term and the constant third term in the parentheses of Eq. (7) in the m̃_D → 0 limit only works for all color channels if the string term is color-blind (i.e., has no Casimir scaling). If, on the other hand, both terms proportional to m_G² were subject to Casimir scaling, it would imply that the r → ∞ limit (i.e., the single-HQ selfenergy) picks up a strong dependence on the color orientation of the quark, which is not natural.

Once the temperature dependence of the parameters m_G, m_D and m̃_D, as well as of the strong coupling, α_s, is determined through fits of the color-average free energy, corresponding expressions for the internal energy can be computed via

U(r,T) = F(r,T) − T ∂F(r,T)/∂T    (10)

and projected into different color channels, a. It is currently an unsettled question whether the free or internal energy (or a linear combination thereof) is the more suitable quantity to be utilized as a static two-body potential in a Schrödinger and/or scattering equation. We recall that the quantity F(r,T) computed in thermal lQCD is the difference between the free energies of the thermal system containing a static QQ̄ pair and the thermal system without the pair. In Ref. [38] it has been argued that the pertinent difference in internal energies, U(r,T), recovers the thermal expectation value of the potential energy between the static Q and Q̄ charges. This suggests U(r) as the appropriate in-medium two-body potential. Such a potential would by construction be a real quantity and thus a natural starting point to be unitarized in a scattering equation, generating appropriate on-shell cuts in the intermediate state through imaginary parts in the scattering amplitude. In Ref. [39] it has been argued that the relevance of F vs. U depends on the interplay of the thermal relaxation time in the heat bath and the interaction time of the Q and Q̄. If the former is much shorter than the latter, the QQ̄ motion will be adiabatic and the free energy should be used; on the other hand, for very short interaction times (e.g., for a broad resonance or high-energy scattering), the internal energy should be more suitable. Another approach to the problem has been pursued by using effective field theories at finite temperature [9,11], combining HQ and perturbative hierarchies (HQ speed v ≪ 1 and gT ≪ T). These studies recover the color-Coulomb part of the interaction and suggest that the free energy plays the role of a potential in a Schrödinger equation. In addition, an imaginary part of the effective potential has been identified [10,40]. We emphasize, however, that within a T-matrix approach, imaginary parts are generated via the unitarization procedure in the intermediate propagator, and thus imaginary parts in the potential should be implemented via suitable cuts in a coupled-channel treatment. To account for the present uncertainty in the identification of an irreducible 2-body HQ potential, we will show numerical results for both F and U as the driving kernel in the T-matrix equation.
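In practice, Eq. (10) is applied to free energies tabulated on a temperature grid. A minimal sketch, with placeholder values, using central finite differences:

```python
# Sketch of extracting U = F - T dF/dT (Eq. 10) from a table of free energies
# on a temperature grid. Grid values are placeholders.
import numpy as np

def internal_energy(F: np.ndarray, T: np.ndarray) -> np.ndarray:
    """U(T) = F(T) - T * dF/dT, with dF/dT from finite differences."""
    dFdT = np.gradient(F, T)
    return F - T * dFdT

T = np.linspace(0.20, 0.40, 11)   # temperatures (GeV)
F = -0.5 + 1.2 * (T - 0.20)       # toy F(T) at fixed r (GeV)
print(internal_energy(F, T))
```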
More precisely, we utilize the subtracted quantity

V_a(r; T) = X_a(r,T) − X(r → ∞, T) ,  X = F or U    (11)

to ensure convergence of the Fourier transform. These choices are believed to bracket the range of interaction strength in the HQ sector.

Besides the potential, another important ingredient to the two-body scattering equation are the in-medium selfenergies of the heavy quark and antiquark, which we treat symmetrically, as appropriate for a hot medium at vanishing baryon chemical potential. Starting from a bare quark of mass m_Q^0, we interpret the potential value at infinite distance, X(r → ∞, T) in Eq. (11), as a temperature-dependent "mean-field" contribution to the HQ masses, i.e., as a real part of the selfenergy,

m_Q(T) = m_Q^0 + X(r → ∞, T)/2 .    (12)

This interpretation follows from the picture that at infinite distance the quarks have become independent from each other, which is supported by lQCD results as discussed above. In addition, we will investigate the effects of imaginary parts of the HQ selfenergy, i.e., a finite HQ width; this quantity has been estimated to be about ∼0.2 GeV in the T-matrix calculations of Ref. [6].

Let us now turn to the fit of the potentials to recent lQCD data for the color-average free energy. To obtain an indication of the systematic uncertainty underlying different lQCD inputs we will carry out all our calculations for N_f = 2+1-flavor QCD [12,36,41] and for N_f = 3-flavor QCD [13,37] (the latter input has been used in the color-singlet channel in a previous T-matrix study [7]), which we refer to in the following as potential 1 and 2, respectively. The underlying (pseudo-)critical temperatures in these lQCD calculations have been quoted as T_c = 196 MeV (potential 1) and 190 MeV (potential 2). Fig. 1 shows the pertinent color-average free energies together with our fits, which have been performed down to temperatures of ca. 0.8 T_c (not all are shown in the plot). The agreement with each data set is fair and supports the adequacy of the underlying model. The temperature dependence of the four fit parameters is displayed in Fig. 2. The variation of all parameters between the two potentials is rather small. The strong coupling constant, α_s(T), depends weakly on T. To suppress fluctuations in an unconstrained fit, we have for simplicity adopted a linear ansatz.^4 The screening masses exhibit an appreciable increase with temperature, reminiscent of the linear T-dependence one expects from leading-order perturbation theory, m_D^pert = (1 + N_f/6)^{1/2} gT. Compared to the perturbative result, the coefficients in our fits are smaller for the Coulomb part (m_D) and larger for the "string" part (m̃_D), suggesting a stronger screening of the confining part (which primarily acts at larger distances). We have tried fits enforcing the condition m_D = m̃_D but could not obtain satisfactory accuracy without introducing unnaturally large variations of the parameters. This might support the assertion of differences in the screening process for the perturbative and nonperturbative components of the free energy. On the other hand, the variation of the glueball mass with T is weak. Recalling the explicit relation to the dimension-2 condensate [21], we find that a constant of about C_2 ≃ 0.8 GeV² above T_c (see left panel of Fig. 3) is well compatible with our fit, as also found previously in Ref. [21] and in analyses of the gluon propagator, three-gluon vertex or quark propagator (see Ref. [22] and references therein).
On the other hand, we have verified that the 20% decrease of C_2(T) across T_c is a robust feature within "reasonable" variations of the other fit parameters; e.g., when imposing an overall T-independent value for C_2, we could not reproduce the lQCD values of F_av^∞ over the entire temperature range without "unnaturally" large variations in the other fit parameters. It is tempting to speculate that the 20% drop in C_2(T) across T_c is related to a similar drop found in the magnetic-monopole density in SU(2) gluodynamics in Ref. [25], where qualitative arguments have been put forward that an A_µ² condensate could be connected to the monopole density. The temperature window over which the variation of C_2(T) occurs basically coincides with where rapid changes in the infinite-distance values F_∞ and U_∞ are observed, cf. right panel of Fig. 3. In the zero-temperature limit, assuming that both screening masses go to zero, a value of m_G ≈ 1 GeV is needed to reproduce the vacuum string tension of √σ = 0.465 GeV found in lQCD [36,37], in connection with strong couplings of α_s = 0.285 and 0.33 for potentials 1 and 2, respectively. All of these values are close to the fitted ones at the lowest temperature.

^4 Without this constraint, the fits tend to generate what we believe are artificially large variations in the parameters, mainly caused by the varying ranges in r covered by the lQCD data at different temperatures. This is particularly evident when fitting to the color-singlet free energy and removing some of the small-distance points. As a general guiding principle in the fits we tried to utilize redundancies in parameter choices to obtain smooth variations with T.

In Fig. 4 we summarize the results for the vacuum potentials in the four different color channels which can be formed in 2-body QQ̄ and QQ systems (recall that in vacuum the entropy term vanishes and thus F_a = U_a; also F_av = F_1 from Eq. (6)). The color blindness of the string term produces a long-range attraction in all channels (which will support colored bound states in vacuum, as discussed in Sec. IV A). The potentials emerging from the model fit at finite T are collected in Fig. 5. One clearly recognizes the "melting" of the string term with increasing T. The singlet (meson) and antitriplet (diquark) potentials remain attractive at all distances. For the octet and sextet potentials some residual attraction from the string term persists at lower temperatures (especially in U), preserving a shallow dip structure for quark separations of around 0.1-0.2 fm. This behavior has also been seen on the lattice [14] and is obviously incompatible with a Casimir scaling of the string term.

III. THERMODYNAMIC T-MATRIX APPROACH

A. Reduction Scheme and Relativistic Corrections

The above constructed in-medium potentials are now implemented into a thermodynamic T-matrix approach. The latter follows from a 3-dimensional (3D) reduction of the Bethe-Salpeter equation in ladder approximation [42-44]. Heavy-quark systems are particularly suitable for this reduction, as their energy transfer is parametrically suppressed compared to the momentum transfer. Even for heavy-light systems the on-shell condition on the heavy quark suppresses the energy transfer relative to the 3-momentum transfer. Note that a 4D treatment cannot improve the accuracy as long as the input is based on static potentials. However, relativistic corrections, as well as different reduction schemes, should and will be addressed below.
The former are necessary to ensure consistency at relativistic energies (and, in fact, establish "minimal" Poincaré invariance of the potential approach [45], see also [46]), while the latter give an indication of the uncertainties inherent in the static approximation.

The 3D integral equation for the T-matrix can be further simplified by applying a partial-wave decomposition, which leads to the following 1D equation for the T-matrix T_{l,a} in a given color channel (a) and partial wave (l):

T_{l,a}(E; q', q) = V_{l,a}(q', q) + (2/π) ∫ dk k² V_{l,a}(q', k) G_12(E; k) T_{l,a}(E; k, q) [1 − n_F(ω_1(k)) − n_F(ω_2(k))] ;    (15)

n_F is the Fermi-Dirac distribution; q = |q⃗|, q' = |q⃗'| and k = |k⃗| denote the relative 3-momentum moduli of the initial, final and intermediate 2-particle states, respectively, and ω_i(k) = (m_i² + k²)^{1/2} are the single-quark energies. Equation (15) encompasses both the heavy-light (1 = Q, 2 = q) and quarkonium (1, 2 = Q) channels, where either particle can be a quark or an antiquark. The precise form of the two-particle propagator, G_12, depends on the reduction scheme [47,48], for which we will investigate two well-established options, namely the Blankenbecler-Sugar (BbS) [42] and the Thompson (Th) [43] schemes, given in Eqs. (16). The main difference between the two schemes is that the dependence on the total energy, E, is quadratic for the BbS propagator but linear for the Thompson version. The form of the propagators in Eqs. (16) further implies that both quarks remain good quasiparticles in the medium, i.e., their widths Γ_{Q,q} are small compared to their masses. We use a minimal width of Γ_{q,Q} = 20 MeV to facilitate numerical stability, unless otherwise stated. The incorporation of microscopic quark spectral functions will be carried out in an upcoming study.

Once the potential is specified, the T-matrix equation (15) is solved using the algorithm of Haftel and Tabakin [49], as in previous works in our context [7,8].
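A schematic illustration of the Haftel-Tabakin strategy follows: discretize the momentum integral on a quadrature grid and solve a linear system. The separable toy potential, the constant width, and all parameter values are illustrative stand-ins for the lattice-based kernels of the text, not the paper's actual input.

```python
# Schematic Haftel-Tabakin solution of a 1D T-matrix equation:
# discretize the integral on a Gauss-Legendre grid and solve (1 - V G) T = V.
import numpy as np

def solve_tmatrix(E, m=1.5, gamma=0.05, g=1.0, lam=0.5, n=64, kmax=8.0):
    x, w = np.polynomial.legendre.leggauss(n)
    k = 0.5 * kmax * (x + 1.0)                 # momentum grid (GeV)
    wk = 0.5 * kmax * w                        # quadrature weights
    omega = np.sqrt(m**2 + k**2)               # single-quark energies
    G = 1.0 / (E - 2.0 * omega + 1j * gamma)   # schematic 2-particle propagator
    f = 1.0 / (k**2 + lam**2)                  # form factor
    V = -g * np.outer(f, f)                    # attractive separable potential
    kernel = (2.0 / np.pi) * V * (wk * k**2 * G)[np.newaxis, :]
    T = np.linalg.solve(np.eye(n) - kernel, V)  # solves T = V + V G T
    return k, T

k, T = solve_tmatrix(E=2.9)
print(T[0, 0])   # T-matrix element at the lowest grid momenta
```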
It remains to specify how we implement the coordinate-space potential as extracted from lQCD in the previous section. We start by performing the Fourier transform and partial-wave expansion of the potential, with the usual Legendre polynomials P_l. Since the string part of the potential (V_S) is primarily active at long distances, i.e., at low momenta and thus in the nonrelativistic regime, no further amendments are applied to it. However, for the Coulomb part (V_C), which dominates at small distances (and thus at relatively large momentum transfers), several corrections are in order.

[FIG. 6: One-gluon exchange diagram for quark-quark scattering in the center-of-mass system.]

To infer relativistic effects, let us back up to the definition of the relativistic Coulomb potential given by the perturbative one-gluon-exchange diagram in Fig. 6. Suppressing the color structure, the Born amplitude (potential) is given by Eq. (19), where we have included a Debye mass as the leading temperature correction in the gluon propagator. In addition to the standard Yukawa propagator (which in the static approximation amounts to setting t → −(q⃗ − q⃗')² in the center-of-mass system), we have a contribution from the contraction of the spinors with the vertex. At the level of the cross section (or amplitude squared) this part gives rise to an additional factor (with the normalization ū u = 1). For large s = E² the terms proportional to t are subleading and can be dropped, so that this factor can be reformulated as a product: one piece is precisely the well-known Breit interaction in electrodynamics, corresponding to a magnetic current-current interaction of two moving charges [23,39,50], while the other "corrects" for relativistic kinematics (a contribution arising from the summation over spins has to be taken out in a spin-independent definition of the potential). We therefore identify two factors, B and R, with which the nonrelativistic (off-shell) potential, V(q, q'), should be augmented; note that B, R → 1 for q, q' ≪ m_{Q,q}.

For the string term, for which we assume a scalar interaction, the spinor contraction leads to a different structure. Assuming again that we can drop the terms proportional to t (relative to m_{Q,q}), no relativistic correction factor is mandated (this refines the procedure adopted in earlier works [7,8]).

To check the impact of our corrections, we compute the cross sections for one-gluon exchange (Fig. 6) in quark-quark scattering using the Coulomb term in Born approximation including our correction factors, and compare them to the exact perturbative QCD (pQCD) results in the left panel of Fig. 7. One finds that the relativistic correction factors B and R are essential to establish consistency with pQCD at high energies; even at low energies, the agreement is no worse than 20%, which supports the approximation of neglecting t against s and m_{Q,q} in the numerator of Eq. (19). The factors R and B provide a substantial improvement over not including them. Without the R factor, one obtains vanishing cross sections at high energy (and only half of the correct magnitude without the Breit correction); even close to threshold the discrepancy with pQCD is larger than with the corrections. Also note that without the relativistic factors the cross section goes to zero for m_q → 0, which becomes an uncontrolled feature in applications to heavy-light scattering. In the right panel of Fig. 7 we compare a "pQCD" calculation assuming a scalar vertex structure to the T-matrix Born results with and without the correction factors B and R. In this case, it is obviously mandated not to include these factors. As to be expected, the nonrelativistic approximation leads to the same result irrespective of whether one uses a vector or a scalar interaction.

Finally, we account for effects of the running coupling constant in the off-shell extrapolation of the potential. For on-shell kinematics, q = q', such effects are effectively taken care of by our parametrization of the potential. For off-shell scattering, we simulate the running coupling by introducing a suppression factor F_run(q, q') < 1, with Δ = 1 GeV and Λ = 0.2 GeV. Putting all corrections together yields the final form of the potential figuring in the T-matrix equation (15). In vacuum the unscreened Coulomb potential exhibits a well-known infrared singularity. We tame this by introducing a small low-momentum cutoff; we have checked that varying this cutoff has a very small effect on the vacuum spectral functions.
B. Quarkonium Correlators and HQ Diffusion

The key quantity for computing observables is the on-shell T-matrix, T_{l,a}(E; q, q), where E = ω_1(q) + ω_2(q) for both in- and outgoing channels. Following Ref. [7], the continuation below the 2-particle threshold, E_thr = m_1 + m_2, is carried out for vanishing 3-momentum, T_{l,a}(E; 0, 0). The mesonic spectral function in a given quantum-number channel α is defined as the imaginary part of the correlation function,

σ_α(E) = −(1/π) Im G_α(E) ,

where we will focus on the case of a heavy quark and antiquark (QQ̄) in the color-singlet channel (a = 1) in a QGP of vanishing net baryon charge (µ_q = 0). The correlator can be written as the sum of a noninteracting contribution, built from the particle/antiparticle projectors, and an interacting part. The Dirac matrices Γ_α ∈ {1, γ_µ, γ_5, γ_µγ_5} characterize the scalar, vector, pseudoscalar and pseudovector channels, respectively (corresponding to χ_c0, J/ψ, η_c and χ_c1 in the charmonium sector). In the following we neglect effects due to spin-orbit and spin-spin (hyperfine) interactions, which is justified in the HQ limit. This implies degeneracy of the S-wave (l = 0) states J/ψ and η_c, as well as of the P-wave (l = 1) χ_c states (in the vacuum spectrum, this is realized within ∼0.1 GeV). Thus, we will compute only the vector (Γ_α = γ_µ) and scalar (Γ_α = 1) channels. The interacting contribution to the correlator in Eq. (29) is given in terms of the off-shell T-matrix, with coefficients a_l resulting from the traces over the spinor structure. In line with the above approximation of neglecting spin-induced interactions, we use a 1/m_Q expansion for these coefficients (which also leads to the degeneracy of pseudoscalar-vector and scalar-axialvector).

To test our quarkonium spectral functions against lQCD correlators, computed with good accuracy in euclidean space-time, we need to calculate the euclidean-time correlator,

G_α(τ; T) = ∫_0^∞ dE σ_α(E, T) K(E, τ; T) .    (36)

The use of a constant width in the two-particle propagators, Eqs. (16), implies a non-vanishing value of σ_α(E → 0). This induces an artificial singularity in the euclidean correlator, since the temperature kernel, K, diverges in the zero-energy limit. This is an artifact of the quasiparticle approximation that can in principle be cured by employing a microscopic calculation of the single-particle selfenergies, leading to the proper limit, σ_α(E → 0) ∝ E, for the retarded meson spectral function. We defer this study to future work and evade the singularity problem by imposing a cutoff below which we set the imaginary part of the propagator to zero, E_cut = 2 (8) GeV for charmonia (bottomonia); our correlator results show very little sensitivity when E_cut is decreased by up to a factor of 2.

An additional subtlety in the comparison of model spectral functions to lQCD euclidean correlators [51,52] is the presence of so-called zero-mode (ZM) contributions. These may be pictured as changing the time direction of a HQ line and thus represent HQ scattering processes including Landau damping (rather than QQ̄ propagation). It turns out that the pseudoscalar quarkonium channel does not pick up the ZM contribution. To avoid extra uncertainties due to the latter, we therefore restrict our comparisons to euclidean lQCD correlators to this channel (recall that within our approximations the S-wave pseudoscalar (η_c) and vector (J/ψ) channels are degenerate).
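The correlator integral can be evaluated directly once a spectral function is specified. Below is a minimal sketch with a toy spectral function; the kernel shown is the conventional finite-temperature form, and all numbers are placeholders rather than results of the text.

```python
# Sketch of the euclidean correlator G(tau;T) = ∫ dE sigma(E,T) K(E,tau;T),
# with the standard thermal kernel K = cosh[E(tau - 1/(2T))]/sinh[E/(2T)].
# The toy spectral function (resonance peak plus E^2 continuum) is a placeholder.
import numpy as np

def kernel(E, tau, T):
    return np.cosh(E * (tau - 1.0 / (2.0 * T))) / np.sinh(E / (2.0 * T))

def correlator(tau, T, E, sigma):
    return np.trapz(sigma * kernel(E, tau, T), E)

T = 0.25                                      # temperature (GeV)
E = np.linspace(0.05, 8.0, 4000)              # energy grid with low-E cutoff
sigma = 0.1 / ((E - 3.0)**2 + 0.01) + 0.05 * E**2   # toy charmonium-like sigma
for tau in np.linspace(0.4, 2.0, 5):          # euclidean times (1/GeV), tau < 1/T
    print(f"tau = {tau:.2f} GeV^-1,  G = {correlator(tau, T, E, sigma):.3e}")
```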
A common way to display the euclidean correlator at a given temperature uses a normalization to a so-called reconstructed correlator, which employs a baseline spectral function (e.g., the vacuum one) with the kernel K at the same temperature as in the numerator,

R_α(τ; T) = G_α(τ; T) / G_α^rec(τ; T) ,  with  G_α^rec(τ; T) = ∫_0^∞ dE σ_α(E, T_rec) K(E, τ; T) .    (37)

The idea underlying this ratio is to exhibit the temperature effects on the in-medium spectral function, σ_α(E, T), relative to a baseline spectral function, σ_α(E, T_rec), and thus to reduce the effects of systematic uncertainties (e.g., discretization effects in lQCD which distort the high-energy part of the spectral functions). As pointed out in Ref. [7], the spectral function used in the reconstructed correlator can have a significant impact on the normalization and shape of R_α(τ; T). Here, we always use our calculated vacuum spectral function as the baseline, i.e., T_rec = 0.

Let us finally elaborate on the diffusion properties of a single heavy (anti-)quark, which we evaluate in terms of our heavy-light quark T-matrix. Following Ref. [53], one may approximate the Boltzmann equation for the HQ distribution function in the QGP by a Fokker-Planck equation and extract the pertinent thermal relaxation rate (inverse relaxation time) in terms of the friction coefficient, A(p). The invariant amplitude squared, which is summed over color, angular-momentum, spin and light-flavor degrees of freedom (and averaged over the d_c = 6 initial spin-color states of a heavy quark), is calculated in terms of S- and P-wave on-shell T-matrices at the center-of-mass (cm) energy and momentum of the collision, weighted with the pertinent color degeneracy factors. In Eq. (38), the distribution functions, n_F, include up (u), down (d) and strange (s) quarks in the thermal heat bath with incoming (outgoing) 3-momentum q⃗ (q⃗'); the in- and outgoing HQ 3-momenta are p⃗ and p⃗'.^5 As an extension of previous work [6,55], we here distinguish explicitly between light- and strange-quark contributions (instead of using light quarks with an effective degeneracy of N_f = 2.5). In accordance with HQ spin symmetry (as adopted in the quarkonium sector) we assume degeneracy of S-waves (e.g., pseudoscalar D and vector D* mesons) and of P-waves (e.g., scalar D_0 and axialvector D_1 mesons). In our numerical calculations below we also evaluate the contributions from HQ scattering off gluons. In this case, a potential picture is less straightforward. Therefore, as in previous work [6,55], we account for elastic HQ-gluon scattering via the leading-order perturbative diagrams (including a Debye screening mass) with a rather large coupling constant of α_s = 0.4.

^5 Due to the slightly different definition of the relativistic factors in our T-matrix compared to Ref. [6], the connection to the cross section is modified [54].
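As a numerical orientation, the sketch below converts an assumed drag coefficient into a relaxation time and a spatial diffusion coefficient via the Einstein relation; the input value is a placeholder, not a result of the present T-matrix calculation.

```python
# Back-of-the-envelope sketch: relaxation time tau = 1/A and spatial diffusion
# coefficient D_s = T/(m_Q A) from an assumed friction (drag) coefficient A.
import math

hbar_c = 0.197                 # conversion constant (GeV fm)
m_c, T = 1.5, 0.25             # charm-quark mass and temperature (GeV)
A_fm = 0.08                    # assumed drag coefficient (1/fm); placeholder

tau_relax = 1.0 / A_fm                 # thermal relaxation time (fm/c)
D_s = T / (m_c * A_fm * hbar_c)        # spatial diffusion coefficient (1/GeV)
print(f"tau = {tau_relax:.1f} fm/c,  2*pi*T*D_s = {2*math.pi*T*D_s:.1f}")
```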
We do this by requiring the S-wave charmonium (bottomonium) ground state to occur at the average mass of η_c and J/ψ at ∼3.04 GeV, and at the Υ(1S) mass of ∼9.46 GeV (we neglect hyperfine splittings), see Fig. 8. Since the entropy term in the HQ free energy vanishes in the vacuum, there is no difference between the free and internal energy. The resulting bare-quark masses are compiled in Tab. I for the two different potentials and reduction schemes. They generally fall into the range expected from the bare masses quoted by the Particle Data Group [56] and are also consistent with previous T-matrix calculations [7]. The spread (in particular the relative one) is somewhat larger in the charm sector (ca. 140 MeV) than in the bottom sector (ca. 100 MeV), in line with the expectation that the 3-D reduction becomes more reliable with increasing mass. The mass gap between the ground and first excited charmonium state varies rather little between the two potentials within a given reduction scheme, δm_ψ = 0.65-0.68 GeV (BbS) and 0.54-0.56 GeV (Th). Compared to the experimental value of m_ψ′ - m_J/ψ = 0.59 GeV, the Thompson scheme seems to be doing slightly better (the bare masses in the BbS scheme tend to be slightly high). The situation is opposite for the pseudoscalar mass splitting between η_c and η_c(2S), where the BbS scheme does slightly better (however, the effects of the hyperfine splitting are expected to be larger in the pseudoscalar than in the vector case). The Th scheme appears to perform somewhat better again for the χ_c states, for which the BbS scheme overpredicts the spin-averaged mass by up to 0.1 GeV. From these observations we deduce an overall uncertainty of our T-matrix approach of 50-100 MeV in the charmonium sector, comparable to corrections one expects from hyperfine splittings.

In the bottomonium sector (lower panels in Fig. 8), the mass gaps between the ground state Υ and the first excited state, as well as between the first and second excited state, are reproduced within 30 MeV (BbS) and 70 MeV (Th). The differences in the potentials have again only minor impact. A similar trend is found for the spin-averaged masses of the χ_b states: using the BbS scheme our results tend to be higher in mass (especially for potential 2) compared to the experimental values for m_χb0 and m_χb2, while for the Th scheme we typically obtain results 30 MeV below experiment. In addition, for both reduction schemes and potentials, we obtain a χ_b(3S) state right at the continuum threshold. Since there is no experimental evidence for this state, this could again be indicative of some over-binding. Recall, however, that we do not account for residual B-B̄ interactions in our single-channel treatment, which could have a significant impact on the spectral function especially close to threshold. As is to be expected, the difference in the Coulomb term of the two vacuum potentials (different α_s but equal string tension) induces larger deviations for the more deeply bound bottomonia, while the sensitivity to the reduction scheme (static approximation) is reduced. Overall, the accuracy of the predictions of our T-matrix approach is at the 10% level of the 1S-2S mass splittings. This is of the same order as (or even below) the observed hyperfine splittings. This seems reasonable given that we have neglected both spin-spin and spin-orbit interactions at the quark level, as well as residual mesonic interactions in DD̄ and BB̄ channels.

Finally, the values of the light- and strange-quark masses have to be fixed.
Since the physics of their effective vacuum masses is rather different from that in the HQ sector (spontaneous chiral symmetry breaking vs. string breaking), we directly adjust the constituent masses. With m_q = 0.4 GeV we obtain an S-wave D-meson mass of 2.01 (2.02) GeV in the Th (BbS) scheme, which coincides with the experimental value for the D* meson (but is larger than the average D-D* mass by ca. 60-70 MeV), see Fig. 9. It turns out that both smaller and larger m_q lead to a larger D-meson mass: in the former case the increase in kinetic energy dominates, while in the latter case the increase in mass is more important. The result for the D-meson mass is roughly consistent with the string-breaking scale in the HQ potential, in the sense that the DD̄ threshold (= twice the D-meson mass) approximately coincides with twice the separation energy of the QQ̄ pair plus their bare masses,

2 m_D ≃ V_∞ + 2 m^0_c = 2 m*_c.

In this interpretation, the binding energy of the heavy-light system should coincide with the constituent light-quark mass. This is roughly satisfied in the charm sector (m_D is ∼3-10% larger than m_c), while the agreement improves in the bottom sector.

Interesting effects are found in the non-singlet color channels (which will figure into our calculations of HQ transport in Sec. IV C below), cf. Fig. 9. In the color-antitriplet diquark channel, where the Coulomb term brings in half of the attraction of the mesonic (color-singlet) channel, a bound state is observed at about m_Qq ≃ 2.1 ± 0.05 GeV, corresponding to a binding energy of ca. 0.15 GeV. To construct a charmed baryon, one may imagine adding another light quark with an estimated binding of ∼0.25 GeV, in analogy to the D-meson. The resulting baryon mass would amount to ∼2.25 GeV, not far from the empirical Λ_c mass of 2.29 GeV. The color-Coulomb interaction is repulsive in the sextet and octet channels, implying that the states at around ∼2.2 GeV are entirely due to the confining force. It is tempting to speculate that the binding of an octet and anti-octet (or sextet and anti-sextet), with a binding energy comparable to the ground-state charmonium, ∼0.6 GeV, could be a relevant configuration underlying the recently discovered X, Y and Z states in the 3.8-4.5 GeV mass region. The small widths of these states would be naturally explained due to their predominantly color non-singlet building blocks, see also Refs. [57-59]. If this picture is correct, one predicts further regimes of rich spectroscopy for narrow "exotic" 4-quark states around masses of 6 GeV (2c2c̄), 10 GeV (bb̄qq̄, 2b2q̄, 2b̄2q) and 20 GeV (2b2b̄).

The empirical heavy-strange mesons, D_s and D*_s, are ca. 100 MeV heavier than the non-strange states (D and D*). We can reproduce this splitting by choosing a constituent strange-quark mass of m_s = 0.55 GeV, consistent with typical values in constituent quark models. Other properties of the cs states are quite similar to our findings for the cq states and will not be reiterated here. This also applies to the open-bottom bq and bs states.

B. Quarkonium Spectral and Correlation Functions in the Quark-Gluon Plasma

With all parameters fixed and in-medium potentials determined, we now proceed to compute the spectral functions of heavy quarkonia in the QGP. These can be tested by comparing the pertinent euclidean correlator ratios, Eq. (36), to computations of this quantity on the lattice. Recent results by Jakovác et al. [60] in quenched QCD and by Aarts et al.
[61] for N_f = 2 show small variations of about 10% in the correlator ratios for charmonia up to temperatures of about 2 T_c, and even less for bottomonia. Such a behavior could be semi-quantitatively reproduced in several potential-model approaches [7,16,17,63]. However, no systematic assessment of relativistic corrections has been performed in these works.

We limit our in-medium investigations to the temperature regime T ≥ 1.2 T_c; closer to T_c, the infinite-distance limit of the internal energy, U_∞(T), exhibits a rapid increase which is presumably associated with the onset of phase-transition dynamics. We do not expect pertinent effects to be properly accounted for in our current single-channel (QQ̄) implementation of the T-matrix. E.g., close to a second-order phase transition, long-range many-body correlations become important, as well as new degrees of freedom such as DD̄ channels, which are not included here. In the following, we divide the presentation into the charmonium (Sec. IV B 1) and bottomonium (Sec. IV B 2) sectors, followed by a combined evaluation (Sec. IV B 3).

1. Charmonium

We begin our in-medium analysis for the S- and P-wave channels in the charmonium sector using a small "numerical width" of 20 MeV for the c and c̄ quarks (recall the degeneracy of pseudoscalar-vector as well as of scalar-axialvector states). In contrast to the vacuum case, we now distinguish two scenarios, depending on whether the free (F) or internal (U) energy is identified as the static finite-temperature potential. The results are compiled in Figs. 10+11 for U and in Figs. 12+13 for F as potential. Let us first focus on the former case, V(r; T) = U(r; T) - U_∞(T). At the level of the in-medium spectral functions, both lQCD inputs and reduction schemes share a number of generic trends, all of which were already present in the T-matrix calculations of Ref. [7]. The S-wave ground state (η_c, J/ψ) survives as a bound state up to temperatures of about 2-2.5 T_c, around which it merges into the cc̄ continuum. But even at temperatures as low as 1.2 T_c the medium effects in the potential induce a reduction of the binding energy, E_B = 2m_c - m_ψ, by about a factor of ∼2, to E_B ≃ 0.3-0.4 GeV compared to 0.6-0.8 GeV in the vacuum (for Th and BbS, respectively). The effective quark mass at this temperature is approximately the same as in vacuum, causing a net increase in the mass of the S-wave ground state to m_ψ(1.2 T_c) ≃ 3.3-3.4 GeV. For higher temperatures, the binding further decreases, but this effect is overcompensated by a reduction in the effective charm-quark mass (i.e., in U_∞(T)/2), so that the mass of the state actually decreases. Along with the decrease in binding goes a reduction in the strength of the state (= peak height of the spectral function at fixed width).

The rather subtle differences in the spectral functions become more apparent in the euclidean correlator ratios, especially between the two reduction schemes (within a given reduction scheme, the two different potentials lead to small variations also for the correlator ratios). For the BbS scheme, the ratios deviate by up to 30-40% from one for temperatures of 1.2-2 T_c. This is too large compared to the 10-15% reduction that has been found in lQCD computations [60-62]. However, employing the Thompson scheme, the correlator ratios are within 15% of one, which is better in line with lQCD.
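To make the sensitivity of such correlator ratios concrete, the following minimal numpy sketch evaluates R(τ; T) = G(τ; T)/G^rec(τ; T) for a toy spectral function (a Breit-Wigner ground-state peak plus a schematic continuum). All masses, widths, thresholds and normalizations here are illustrative assumptions, not the fitted parameters of the T-matrix calculation described above.

```python
import numpy as np

def kernel(E, tau, T):
    # Finite-temperature kernel K(E, tau; T) = cosh[E(tau - 1/2T)] / sinh[E/2T]
    return np.cosh(E * (tau - 1.0 / (2.0 * T))) / np.sinh(E / (2.0 * T))

def toy_spectral(E, M, G, Ethr=3.8):
    # Breit-Wigner ground-state peak plus a schematic continuum above threshold
    bw = E * G / ((E**2 - M**2) ** 2 + (E * G) ** 2)
    cont = 1e-3 * np.sqrt(np.maximum(E**2 - Ethr**2, 0.0))
    return bw + cont

def correlator(tau, T, sigma, Emax=10.0, n=4000):
    # Riemann-sum evaluation of G(tau; T) = int dE sigma(E) K(E, tau; T)
    E = np.linspace(1e-3, Emax, n)
    return float(np.sum(sigma(E) * kernel(E, tau, T)) * (E[1] - E[0]))

T = 0.3  # GeV; an assumed temperature of ~1.5 Tc for Tc ~ 0.2 GeV
taus = np.linspace(0.05, 1.0 / (2.0 * T), 15)       # tau up to the midpoint 1/2T
inmed = lambda E: toy_spectral(E, M=3.35, G=0.25)   # "in-medium": heavier, broader
vac = lambda E: toy_spectral(E, M=3.10, G=0.05)     # baseline ("vacuum") spectrum
R = [correlator(t, T, inmed) / correlator(t, T, vac) for t in taus]
print(np.round(R, 3))  # deviations of R from one grow with tau
```

Shifting the toy peak mass or redistributing strength into the continuum changes R(τ) at the tens-of-percent level, which illustrates why the bare-mass (threshold) differences between the BbS and Th schemes feed through so strongly into the ratios.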
The technical reason for the difference in the correlator ratios between the BbS and Thompson schemes can be traced back to the larger binding that the BbS scheme generates already in the vacuum. This requires relatively large bare charm-quark masses (recall Tab. I), which in the medium ultimately lead to too large a ground-state mass (or continuum threshold) as the state approaches its dissolution (note that in the BbS scheme the J/ψ (or η_c) mass at 2 T_c is still significantly above its vacuum mass, while in the Th scheme it has dropped below the vacuum value). For the ground-state P-wave state (χ_c) we also find that, right above T_c, it is heavier than in vacuum due to the decrease in binding. However, due to its relatively small binding energy (in vacuum E_B ≃ 0.22-0.25 GeV and 0.3-0.35 GeV in the Th and BbS schemes, respectively) it dissolves just above ∼1.2 T_c, where it merges into the cc̄ continuum.

Next we discuss the in-medium charmonium results when using F as potential, V(r; T) = F(r; T) - F_∞(T), summarized in Figs. 12 and 13. Compared to the use of U, the in-medium binding is appreciably reduced (recall Fig. 5). For example, the binding energy of the S-wave ground state at 1.2 T_c is reduced by about one order of magnitude, and the state dissolves shortly thereafter, at ∼1.3 T_c (Fig. 12). The P-wave states have disappeared already just above T_c. At the same time the value of the potential at infinity provides a smaller selfenergy correction (see Fig. 3), leading to a smaller effective quark mass and, consequently, a lowered continuum threshold compared to using U. This, in particular, entails no or only a small rise in the in-medium mass of the J/ψ above T_c. For the BbS scheme the drop in effective mass and the reduction in binding nearly compensate each other, leading to a stable J/ψ mass until dissolution. For the Th scheme the smaller bare-quark mass even leads to a net decrease of the in-medium J/ψ mass. The euclidean correlator ratios are again very similar for the different potentials but exhibit a significantly different τ dependence for the two reduction schemes. For the BbS scheme the deviation relative to the vacuum correlator is up to ∼50%, while for the Th scheme it is no more than 10%. However, for both schemes the temperature evolution is remarkably stable, with variations of no more than 15% even in the BbS scheme. Thus the rather large overall deviation originates from the reconstructed (vacuum) correlator, where the problem can be traced back to the large bare-quark mass which is required in this scheme due to the large vacuum binding energy.

To further map out uncertainties we consider the influence of a quark width on the correlator ratios. In Refs. [6,8] it has been found that the charm-quark width above T_c may be as large as 0.2 GeV. We inserted into Eqs. (16) a value of Γ_Q = 0.1 GeV for the HQ width and plot the results, using U as potential, in Fig. 14. As an immediate consequence, the J/ψ width turns out to be about twice the single-quark width, as is to be expected. The "dissociation" temperature (loosely defined as the temperature where the peak height is reduced to less than twice the height of the continuum) decreases significantly compared to the narrow-width approximation, to about 1.7 T_c: the broadening of the resonance structure simply accelerates the merging with the continuum part. The peak positions (masses) of the narrow-width calculation are basically preserved.
The correlator ratios are increased compared to the calculation with small quark widths. The magnitude of the effect is relatively small for the Th scheme, where the spread was already small before. For the BbS scheme the increase is more significant: the spread of up to 40% in the narrow-width calculation is reduced to within 30%. Similar systematics are also found when using F as potential.

[Figure caption: same as Fig. 11 but using the free energy (F) as potential.]

2. Bottomonium

In analogy to the charmonium calculations we supply a small "numerical width" of 20 MeV for the b and b̄ quarks. The in-medium bottomonium spectral functions and correlator ratios are compiled in Figs. 15+16 for U and in Figs. 17+18 for F as potential. For the U-potential, similar to charmonium, the reduction in binding combined with a large effective quark mass (similar to the vacuum value) leads to an increase in the mass of all bottomonium states right above T_c. Within the BbS scheme the mass of the lowest Υ bound state varies by less than 100 MeV over the considered temperature range of 1.2-2 T_c: the lowering of the bb̄ threshold and the loss in binding nearly compensate. But even at 2 T_c a well-defined Υ(1S) bound state persists. The Υ(2S) survives up to a temperature of about 1.7 T_c and shows a much larger variation in mass (about 0.5 GeV), while the Υ(3S) basically dissolves at T_c. In the Th scheme we observe a similar pattern. For the euclidean correlator ratios the calculations within the BbS scheme deviate from one by 20-25%, more than seen on the lattice. However, the relative temperature variations are smaller, ca. 10-15%. In the Th scheme the temperature variations are further reduced to less than 10%, and also the deviations from one are smaller, which is better in line with the essentially constant lQCD correlator ratios close to one. Further stabilization of our results is conceivable with more realistic in-medium widths and/or improvements in the connection between vacuum and in-medium potentials. For the P-wave χ_b states the ground state melts at about 1.7 T_c, while the first excited state dissolves at about 1.2 T_c.

When using F as potential the reduction in binding is again more pronounced, with a dissolution of all excited S-wave Υ's and all χ_b states right at T_c. Only the Υ ground state survives until somewhat above 2 T_c. Compared to the calculation with U as potential the strength of the state at 2 T_c is reduced by a factor of ∼3, indicating the lower binding, while its mass is about 200 MeV smaller (the loss in HQ mass overcompensates the loss in binding energy). Also here the temperature dependence of the ground-state mass is rather stable. As before, the euclidean-correlator ratios are rather sensitive to the interplay of HQ mass, quarkonium binding and the "polestrength" of the states. The BbS scheme again shows appreciable deviations from one for both potentials, up to 40%, while for the Th scheme these are 10-15%. However, the spread in the temperature dependence is less than 10% for both reduction schemes and lattice inputs. We have verified that the inclusion of larger HQ widths has effects similar to those in the charmonium sector.

3. Discussion of Quarkonium Results

Let us try to summarize and evaluate the findings in the quarkonium sector. Within the Th scheme, all S-wave correlator ratios (for both lQCD inputs, for U and F, as well as for charmonium and bottomonium) are within ca. 15% of one, for all temperatures between 1.2 and 2 T_c.
For a given calculation (scenario) the relative deviations within this temperature range are, in most cases, even smaller, suggesting that the reconstructed correlators play a non-negligible role in the absolute uncertainty (e.g., "residual" hadronic interactions between D and D̄ states in the continuum are not accounted for in a single-channel T-matrix as employed here). Within the BbS scheme, we generally find larger deviations of the correlator ratios from one (by up to ∼50%); within a given scenario, the temperature variations are significantly smaller, up to 30% (or even less, especially for the free energy). While this may overestimate the uncertainty associated with the 4D→3D reduction scheme (recall that the BbS scheme has a tendency for over-binding, even in vacuum, see also the discussion in Appendix A), it stipulates that the static approximation (especially for charmonia) requires further scrutiny if one aims at an absolute accuracy at the 10% level (applications based on the (nonrelativistic) Schrödinger equation are expected to be beset with larger uncertainties). We also corroborated indications found in Ref. [7] that effects of a finite spectral width are not negligible either, increasing correlator ratios at the 5-10% level. Our schematic implementation of the in-medium widths has only scratched the surface of a full many-body calculation utilizing microscopic single-quark spectral functions in the T-matrix equation (see, e.g., Ref. [64] for a recent perturbative calculation of the HQ spectral function in the QGP).

Our analysis corroborates indications from earlier studies [7,16,17,63] that there is currently no decisive discrimination power between the different scenarios realized by the use of U ("strong binding") and F ("weak binding"). When employing U, the mechanism underlying a constant (or temperature-stable) correlator ratio is rather involved, being a combination of four components: On the one hand, the binding energies close to (but above) T_c are rather large (several 100 MeV), together with a large polestrength (due to the steepness of the U-potential at intermediate distances). On the other hand, the effective HQ mass, governed by U_∞(T), and thus the QQ̄ threshold energy, are also large (basically as in vacuum). With increasing temperature, the binding and the polestrength drop, as do the HQ mass and continuum threshold, so that the net effect on the correlator ratio remains small. When using F, the binding already vanishes just above T_c, and the balance in the spectral function upon increasing T is between a further loss of strength in the threshold state (cusp) and a reduction in the HQ threshold. In particular, with the F-potential one does not encounter a regime above T_c with a large variation in binding energy. However, going further down in temperature, such a regime must inevitably occur when approaching the vacuum limit, and similar "complications" as in the U-potential calculations above T_c are to be expected. Thus, a sensitive test of whether the F-potential can be consistent with lQCD correlators is in the temperature regime where the largest variation in binding occurs (which is apparently at or below T_c).

C. Heavy-Quark Diffusion in the Quark-Gluon Plasma

Following the analysis of in-medium quarkonia we now turn to evaluating heavy-flavor transport in the QGP. In the vacuum we have found that the low-lying D-meson spectrum is reasonably well reproduced, but also that shallow bound states might occur in colored heavy-light two-body channels (recall Fig. 9).
The calculation of the heavy-light T-matrix in the QGP requires an additional input in terms of the in-medium light-quark masses (recall that the in-medium HQ selfenergy is determined by the infinite-distance limit of the free/internal energy according to Eq. (12)). Due to chiral symmetry restoration, the vacuum constituent-quark mass is expected to approach zero; however, the light quarks and gluons most likely acquire (chirally symmetric) thermal masses. We approximate these by adopting the functional form expected from perturbative QCD [67], i.e., thermal masses proportional to the temperature, m_{q,g}(T) ∝ gT (Eq. (43)). When implemented into a quasiparticle (QP) description of the QGP, this form allows one to recover an energy density, ε_QP, which is roughly 10-20% below the perturbative value, independent of temperature [68,69]. We fix the strong coupling in Eq. (43) at g = 2.3, resulting in ε_QP/ε_SB ≃ 0.83, consistent with recent lQCD calculations [70] for T ≥ 1.4 T_c. This value for g is also compatible with our perturbative calculations for scattering off thermal partons (α_s ≃ 0.4).

In Fig. 19 we compile in-medium charm-light T-matrices using U for potential-1 within the Th scheme, with an in-medium single-quark width of 100 MeV (uncertainties due to reduction scheme and potential are exhibited in the context of the thermal relaxation rates below). In the medium the color-sextet and -octet correlations fade rapidly due to screening of the attractive string part of the potential (recall Fig. 5). The meson (color-singlet) and diquark (color-antitriplet) channels feature broad "Feshbach resonances" (i.e., resonances at threshold) up to ∼1.5 T_c. Compared to previous T-matrix results [6], the diquark state is slightly more robust, due to the refined (color-blind) treatment of the string term (e.g., the ratio of peak heights for color antitriplet to color singlet at 1.2 T_c is about twice as large). For the charm-strange correlations similar patterns are found.

Next we calculate HQ relaxation rates. The original suggestion of nonperturbative effects in HQ diffusion was put forward in Ref. [55] using an effective resonance model where the masses and coupling strengths were free parameters; within reasonable ranges of these, a factor 2-4 shorter thermalization times compared to pQCD were found. Subsequently, heavy-light T-matrix calculations [6] were carried out to render the schematic estimates more quantitative (and to check for the existence of D-meson resonances in the QGP), roughly confirming the results of the resonance model if the U-potential is employed. Here, we elaborate for the first time a quantitative connection to in-medium quarkonium properties. With the potential and all other parameters determined, our relaxation rates, A(p), are predictions of the approach. They are calculated utilizing Eq. (38) and displayed in Figs. 20 and 21 as a function of the HQ momentum for several temperatures above T_c. For completeness, we have added to the T-matrix results the contribution from HQ scattering off gluons using LO pQCD diagrams (including Debye screening) with a coupling constant α_s = 0.4. At the lowest temperature, T = 1.2 T_c, we find γ_c = 0.14-0.2 fm⁻¹, where most of the variation is due to the potential choice, while the reduction schemes agree within 10% for a given potential (pQCD scattering off gluons contributes ca. 0.025 fm⁻¹). Thus, in the scattering regime the dependence on the reduction scheme is less pronounced than for bound states (see also Appendix A).
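For orientation, the quoted rates translate into thermalization times via τ_therm = 1/γ. The following few lines of Python perform this conversion for the numbers given above (natural units, rates in fm⁻¹, times in fm/c):

```python
# Convert low-momentum charm relaxation rates (quoted in the text) into
# thermalization times, tau_therm = 1/gamma, in units of fm/c.
rates = {
    "T-matrix + pQCD gluons, 1.2 Tc (lower)": 0.14,
    "T-matrix + pQCD gluons, 1.2 Tc (upper)": 0.20,
    "pQCD gluon contribution alone, 1.2 Tc": 0.025,
}
for label, gamma in rates.items():
    print(f"{label}: tau_therm = {1.0 / gamma:.0f} fm/c")
# -> roughly 5-7 fm/c for the full rate, versus ~40 fm/c for the
#    perturbative gluon piece alone, quantifying the nonperturbative gain.
```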
The relaxation rate increases to 0.25-0.33 fm⁻¹ at 2 T_c (again with most of the spread owing to the difference in the potentials; pQCD scattering off gluons contributes ca. 0.07 fm⁻¹). The magnitude of the low-momentum relaxation rates at T = 1.2 T_c (2 T_c) is a factor 4-5 (2.5-3.5) larger than for a LO pQCD calculation for scattering off thermal quarks, antiquarks and gluons with α_s = 0.4. They are slightly larger than the previous T-matrix results of Ref. [6], where γ_c = 0.12-0.19 fm⁻¹ was obtained over the temperature range T = 1.1-1.8 T_c for parametrizations of yet two other (quenched [19,65] and N_f = 2 [39,66]) lQCD-based internal energies. In the previous calculations [6] a constant charm-quark mass of m_c = 1.5 GeV was used, while we here include the in-medium selfenergy from the infinite-distance limit of the internal (or free) energy. When using U, m*_c is larger than 1.5 GeV up to temperatures of ca. 1.9 T_c (potential-1 with Th scheme, see Tab. I and right panel of Fig. 3). The extra interaction strength in our present calculation compared to Ref. [6] is mostly due to the color-blind treatment of the string term, particularly in the diquark channel. We emphasize that the in-medium HQ masses as used here are mandatory to maintain consistency with the quarkonium correlator ratios, where they play a critical role in balancing the changes in binding energy. Our investigations actually show that the internal energy based on the quenched lQCD input from Refs. [19,65] leads to euclidean correlator ratios for quarkonia which exhibit a large temperature variation (decrease with increasing T) incompatible with lQCD results, i.e., well beyond the 30% error margin deduced in Sec. IV B 1. The large temperature variation (screening) in the underlying potential leads to a decrease of the thermalization rate with temperature. This feature is not confirmed in the more quantitative calculations presented here. However, the increase with temperature of γ_c for our T-matrix (plus pQCD gluon scattering) calculations is significantly slower than for the LO pQCD calculations with temperature-dependent Debye mass: for T = 1.2 → 2 T_c the former increases by a factor of ∼1.7 (less for the T-matrix contribution alone), compared to a factor of ∼2.5 for LO pQCD only (light anti-/quarks and gluons). Furthermore, in the T-matrix calculations A(p) decreases appreciably with increasing 3-momentum, while the LO pQCD results are almost constant. This is simply due to the fact that with increasing 3-momentum the charm quark is less likely to excite a low-energy Feshbach resonance in collisions with thermal quarks or antiquarks. At high 3-momentum, resummation effects in the T-matrix cease and the relaxation rates come closer to the LO pQCD results (recall the importance of the proper relativistic factors for this behavior). The difference at high 3-momentum is mostly due to the smaller value of the screening mass of the Coulomb term in our lQCD fit relative to the pQCD value, m_D^pQCD = √(1 + N_f/6) gT. As in Ref. [6], the dominant contribution to the HQ relaxation rate originates from the S-wave meson (color-singlet) and diquark (color-antitriplet) channels, while the octet and sextet channels are suppressed (even at 1.2 T_c), as is immediately inferred from the magnitudes of the corresponding T-matrices in Fig. 19. The P-wave channels contribute about 30% of the S-waves.
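To illustrate the pQCD reference value of the screening mass, the sketch below evaluates m_D^pQCD = √(1 + N_f/6) gT (the leading-order Debye mass for N_c = 3, as quoted above) with g = √(4πα_s) for α_s = 0.4; the value T_c ≈ 0.19 GeV is an assumed pseudo-critical temperature used only to convert T/T_c into GeV.

```python
import math

alpha_s, Nf = 0.4, 3
g = math.sqrt(4.0 * math.pi * alpha_s)       # g ~ 2.24 for alpha_s = 0.4
Tc = 0.19                                    # GeV (assumed pseudo-critical T)
for x in (1.2, 1.5, 2.0):
    T = x * Tc
    mD = math.sqrt(1.0 + Nf / 6.0) * g * T   # LO pQCD Debye mass, Nc = 3
    print(f"T = {x:.1f} Tc: m_D^pQCD ~ {mD:.2f} GeV")
```

A fitted Coulomb screening mass below these values weakens the screening of hard momentum transfers and hence keeps the high-momentum T-matrix rates above the LO pQCD ones, as described above.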
When using F instead of U as potential, the low-momentum charm-quark relaxation rate is reduced by a factor of ∼2, but remains larger than the LO pQCD results by a similar factor, cf. Fig. 21. Consequently, the rates come closer to the LO pQCD results at high momentum, even though a significant enhancement persists even at p = 5 GeV (mostly due to the differences in screening mass as mentioned above).

To put our results in context with other approaches we display in Fig. 22 (left panel) the temperature dependence of the relaxation rate at zero momentum for different models. Specifically, we compare our results for U and F to LO pQCD, to earlier T-matrix calculations [6] and to estimates from gravity-gauge duality (AdS/CFT) [73,74] (see also Refs. [71,72] for LO calculations with running coupling). The uncertainty bands associated with our T-matrix calculations are largely governed by the differences in the underlying lQCD input. As discussed above, the results using U overlap with the earlier T-matrix calculations (where also U has been used as potential), especially when the latter would be calculated with a color-blind string term. When using F the results are closer to, but still significantly above, LO pQCD. The AdS/CFT rates are markedly larger than any of the T-matrix rates, except for extrapolations close to T_c.

In the right panel of Fig. 22 we compile the temperature dependence of the spatial diffusion coefficient, D_s = T/(m_Q γ), for the above-discussed approaches. We plot D_s in units of the thermal wavelength of the medium, 1/(2πT), which renders it suggestive for a connection to the widely discussed ratio of viscosity to entropy density. E.g., in kinetic theory for a weakly interacting gas one has the approximate relation

η/s ≃ (1/2) T D_s. (44)

In the strongly coupled limit of the AdS/CFT correspondence, one finds the same parametric dependence, albeit with a different numerical coefficient (the conjectured lower bound of η/s = 1/4π corresponds to diffusion at the thermal wavelength, D_s ≃ 1/(2πT)). The ratio 2πT D_s (cf. Eq. (44)) is almost constant for LO pQCD and the T-matrix approach with F as potential, decreasing by less than 5% and up to 30%, respectively, for T = 2 → 1.2 T_c. The variation is larger, ca. 50%, if U is used as potential. The largest variation of more than 50% is found with the quenched lQCD input [19,65] for U in the previous T-matrix calculations, but, as we indicated above, this T-dependence is incompatible with the small temperature variation in the euclidean quarkonium correlator ratios. Nevertheless, our current, better constrained T-matrix calculations support a decreasing trend when approaching the "critical" temperature from above, as is typical for many substances at or in the vicinity of a second-order transition. In Figs. 23 and 24 we display the relaxation rates for bottom quarks for the U- and F-potential, respectively. The general trends (and quantitative enhancements over LO pQCD) are very similar to the charm case, so that an analogous discussion applies which we do not reiterate here.

V. SUMMARY AND CONCLUSIONS

We have set up a common framework to evaluate properties of open and hidden heavy-flavor states in the QGP. A thermodynamic T-matrix formalism for heavy quarkonia and heavy-light quark interactions has been combined with input potentials estimated from heavy-quark free energies computed in lattice QCD. Compared to earlier calculations, we have refined this link by utilizing a field-theoretic ansatz for an effective in-medium gluon propagator.
This enabled the fits to be carried out at the level of the color-averaged free energy while disentangling color-Coulomb and confining interactions, and thus to gain insights into their medium modifications via the temperature dependence of the associated fit parameters (screening masses and coupling strengths). The T-matrix calculations further allowed us to identify appropriate relativistic corrections to the static potential, including differences between vector and scalar interactions for the color-Coulomb and confining parts, respectively. E.g., a color-Breit correction naturally emerges for the Coulomb term. The relativistic corrections are crucial to establish quantitative consistency for high-energy scattering between perturbative QCD and the T-matrix in Born approximation. This connection is a prerequisite for a simultaneous treatment of bound and scattering states, which was one of the main objectives of our work.

The bare masses of the charm and bottom quarks have been fixed to the (spin-averaged) masses of the quarkonium ground states, η_c-J/ψ and Υ, in vacuum. The resulting mass splittings for the excited states agree with the experimental values within ca. ±10%, which is smaller than the effects due to hyperfine interactions, which have been neglected in this work. The largest source of uncertainty turned out to be the static reduction scheme underlying the scattering equation, while the two considered lattice potentials induced smaller variations. We also verified that the vacuum D- and B-meson states are reasonably well recovered when using typical values for the constituent light- and strange-quark masses. As a byproduct, we found that the scalar treatment of the confining force leads to shallow bound states in the color-sextet and -octet channels in vacuum, which might be relevant for a rather rich spectroscopy of narrow four-quark states as discussed in the recent literature.

Our finite-temperature calculations have been carried out within two scenarios of adopting an in-medium potential from the lattice results, either the free (F) or internal (U) energy. First, we calculated spectral functions and pertinent euclidean-correlator ratios for heavy quarkonia. We confirmed the previously found trend that for F charmonia dissolve rather close to T_c (T_diss ≃ 1.2 T_c), while for U the J/ψ may survive up to 2-2.5 T_c. However, both scenarios can lead to almost constant correlator ratios, and thus to agreement with lattice QCD results for this quantity. The reason is a small in-medium HQ mass correction when using F, while it is larger for U. As in the vacuum, we found significant variations due to the static reduction scheme, reflected by deviations of up to ∼40% in the correlator ratios at a given temperature. However, within a given reduction scheme, potential choice and lattice input, the relative temperature variation of the correlator ratios is usually much smaller. This suggests that future studies should scrutinize corrections to the static approximation, but also the role of the reconstructed (vacuum) correlator figuring into the denominator of the ratios, especially close to threshold, where hadronic (DD̄) correlations could become important.

For heavy-flavor transport in the QGP, the use of U leads to a factor of ∼2 smaller thermalization times and (spatial) diffusion constant compared to F.
This is largely due to "Feshbach"-type resonances in meson and diquark channels up to 1.3-1.5 T_c, but nonperturbative rescattering strength persists in the heavy-light T-matrix for temperatures beyond 2 T_c. Even when using F as potential, these effects lead to up to a factor of 2 faster thermalization compared to perturbative scattering. The uncertainty due to the reduction scheme is smaller for heavy-quark transport coefficients than for quarkonium correlator ratios. The screening effects in the interaction generate a significant increase of the spatial diffusion constant (in units of the thermal wavelength) with temperature (especially for U), suggestive of a minimum toward T_c.

Our analyses suggest that a thermodynamic T-matrix approach can be used to establish quantitative relations between quarkonium survival and heavy-quark transport in the QGP. In particular, we have assessed uncertainties associated with commonly applied static (potential) and nonrelativistic approximations. While relativistic corrections are mandatory in the scattering regime, the uncertainty due to the static approximation turned out to be on the few-tens-of-percent level, which is relatively large for the lattice correlator ratios, but relatively small in the context of current estimates for heavy-quark diffusion coefficients. A pressing issue remains the additional uncertainty in the definition of a finite-temperature potential, especially when based on model-independent input from thermal lattice QCD.

Several directions for future investigations emerge from our studies. As already mentioned, retardation effects and the influence of virtual anti-particle contributions need to be addressed, especially in the bound-state regime, e.g., by replacing the T-matrix by a Dyson-Schwinger formalism at finite temperature. Such studies could also facilitate the treatment of heavy-quark interactions with thermal gluons beyond the perturbative level. A more microscopic treatment of the heavy-quark width figuring into the 2-particle propagator of the scattering equation is desirable and in principle straightforward. Additional finite-width effects arise via inelastic interaction channels, which can be implemented via coupled channels into the T-matrix equation. For example, gluon radiation is expected to become important for high-energy charm-quark scattering and/or quarkonium dissolution, while DD̄ or even magnetic charge-anticharge states could improve the description around T_c and extend it to temperatures below T_c. Heavy-quark susceptibilities, or more generally correlators of charm quarks with conserved charges (e.g., baryon number or strangeness), which are computed with good accuracy in thermal lattice QCD, can be calculated with our T-matrix. Here, the presence of broad resonances does not necessarily imply large signals in such quantities. Finally, the in-medium quarkonium and heavy-quark transport properties should be implemented into a comprehensive phenomenological analysis of pertinent observables in heavy-ion collisions, e.g., via rate equations and/or Langevin simulations in a realistic bulk-medium evolution. This will provide quantitative tests of the equilibrium results in current and future experiments and thus advance our understanding of strongly coupled QCD matter at temperatures around and above T_c. Work along some of these lines has been initiated.

Acknowledgments: This work has been supported by U.S. National Science Foundation grants PHY-0449489 (CAREER) and PHY-0969394, and by the Alexander-von-Humboldt Foundation.
Appendix A: Differences in the reduction scheme

In this appendix we discuss differences between the BbS and Th reduction schemes. For this purpose we concentrate on the case of heavy-quarkonium bound states. From Eqs. (16) it follows that the BbS and Th 2-particle propagators differ by an energy- and momentum-dependent factor of the form

G^BbS(E; k) ≈ [2ω_Q(k) / (E/2 + ω_Q(k))] G^Th(E; k) ≈ G^Th(E; k) for E ≈ 2ω_Q(k).

This condition is rather well satisfied in the scattering region, i.e., above the 2-particle threshold, where the integral is dominated by the pole (unitarity cut) of the propagator, E - 2ω_Q(k) ≈ 0, which implies E ≈ 2ω_Q(k). However, in the bound-state regime, i.e., below threshold, the situation can be different. For example, in the extreme case of E → 0 the difference between the propagators becomes as large as a factor of 2, entailing large discrepancies in the results for the T-matrix.

Let us try to assess the differences more quantitatively for the case at hand, i.e., for the binding energy of the charmonium ground state in vacuum. Our results for the BbS and Th schemes show a ca. 25% difference in the J/ψ binding energy (the explicit values are quoted in the legend of Fig. 26). As a rough guideline, the influence of G on the binding may be estimated by formally writing the solution of the T-matrix as T = V/(1 - GV). At the bound-state energy, one has GV = 1, and thus a 25% change in G approximately "mimics" a 25% stronger potential, or binding energy (for the same static input potential). Thus, for the BbS propagator in the T-matrix integral one should expect, on average, an enhancement factor

f ≡ ⟨G^BbS/G^Th⟩ ≃ 1.25.

To estimate the relevance of the integration momenta we apply a cutoff, λ, in the T-matrix equation (15) and study the dependence of f(λ) and the J/ψ binding energy on this cutoff, as displayed in Fig. 25. Taking as an approximate representative momentum the one by which half of the binding is built up (λ ≃ 1 GeV) and evaluating the "BbS factor" at this value, one finds f(λ) ≃ 1.2. The magnitude of the deviations between BbS and Th for bound states can thus be roughly accounted for and is expected to become larger with increasing ratio of binding energy to the mass of the constituents.

Appendix B: Binding energies

In this appendix we compile the temperature dependence of the binding energies of the J/ψ and Υ ground states. We define the binding energy as the difference between the quark-antiquark threshold, 2m_Q, and the mass of the state in question. In the vacuum the J/ψ (η_c) binding energy is about 0.65 (0.85) GeV for the Th (BbS) scheme. The in-medium binding energies are shown in Fig. 26 when using U as potential (for F the state already dissolves at about 1.3 T_c). One observes that the BbS scheme leads to a steeper dependence of the binding on temperature compared to the Th scheme, while the melting temperature is quite similar in both cases. A similar pattern occurs for the Υ ground state, displayed in the left and right panels of Fig. 27 when using U and F, respectively. Note, however, that the scheme dependence of the binding energy is significantly reduced in the bottomonium case, to about 10%, reflecting a better accuracy of the static approximation due to the larger bottom-quark mass, as expected. With the weaker interaction implicit in F the binding is reduced by about a factor of 4. The uncertainty induced by the different lQCD inputs is significantly larger than the one caused by the reduction scheme.

Appendix C: Impact of the confining interaction

To assess the role of the string term in our results, we repeat our calculations with the string term, V_S, switched off while all other parameters are kept fixed. The corresponding results for the internal energy, U, are obtained using Eq.
(10) with the modified free energy (we keep, however, the selfenergy from the full calculation). The results using U as the potential are presented in Fig. 28 using the Thompson reduction. For the J/ψ (η_c) states the most striking difference occurs in the vacuum, where without the confining interaction no excited bound states are supported and only a modest threshold enhancement remains for the ground state. In the medium the relevance of the string term gradually decreases until the results become similar to the full calculation for a temperature close to 2 T_c (even though the peak height is still smaller). This follows from the significantly stronger screening of the confining relative to the Coulomb term (the screening mass of the string term is much larger than the Debye mass of the Coulomb term, m_D(T), above T_c, recall Fig. 2). Close to T_c half of the binding of the J/ψ is still supplied by remnants of the confining force. These systematics suggest that charmonia are rather sensitive to medium effects on the confining force in the temperature regime of 1-2 T_c. At first glance it might seem surprising that the calculation without string term produces more binding in the medium than in the vacuum. The reason is that, without the string term, the internal energy, as given by Eq. (10), leads to a more attractive potential in the medium (up to ∼1.5 T_c) than in the vacuum (note that we are still using the large effective mass which, of course, is generated by the large-distance limit of the string term). For the more tightly bound bottomonia (Υ) the sensitivity to the string term is still appreciable. In the vacuum the ground state is only bound by ca. 100 MeV, while the excited states are unbound. In the medium a similar trend as in the charmonium sector is observed, in that the significance of the string term ceases as temperature increases. Generally, our findings clearly demonstrate the importance of the confining interaction in both charmonium and bottomonium spectroscopy, both in vacuum and in medium for temperatures of up to ca. 2 T_c. The use of potentials developed in a perturbative expansion therefore omits important physics in the description of quarkonium melting in medium.
Modelling the Contributions of Malaria, HIV, Malnutrition and Rainfall to the Decline in Paediatric Invasive Non-typhoidal Salmonella Disease in Malawi

Introduction Nontyphoidal Salmonellae (NTS) are responsible for a huge burden of bloodstream infection in Sub-Saharan African children. Recent reports of a decline in invasive NTS (iNTS) disease from Kenya and The Gambia have emphasised an association with malaria control. Following a similar decline in iNTS disease in Malawi, we have used 9 years of continuous longitudinal data to model the interrelationships between iNTS disease, malaria, HIV and malnutrition. Methods Trends in monthly numbers of childhood iNTS disease presenting at Queen's Hospital, Blantyre, Malawi from 2002 to 2010 were reviewed in the context of longitudinal monthly data describing malaria slide-positivity among paediatric febrile admissions, paediatric HIV prevalence, nutritional rehabilitation unit admissions and monthly rainfall over the same 9 years, using structural equation models (SEM). Results Analysis of 3,105 iNTS episodes identified from 49,093 blood cultures showed an 11.8% annual decline in iNTS (p < 0.001). SEM analysis produced a stable model with good fit, revealing direct and statistically significant seasonal effects of malaria and malnutrition on the prevalence of iNTS disease. When these data were smoothed to eliminate seasonal cyclic changes, these associations remained strong and there were additional significant effects of HIV prevalence. Conclusions These data suggest that the overall decline in iNTS disease observed in Malawi is attributable to multiple public health interventions leading to reductions in malaria, HIV and acute malnutrition. Understanding the impacts of public health programmes on iNTS disease is essential to plan and evaluate interventions.

Introduction

Blood stream infection (BSI) caused by non-typhoidal Salmonella (NTS) is consistently reported as a major cause of morbidity and mortality in children across sub-Saharan Africa (SSA), especially those aged between 6 and 30 months
[1]. As pathogens such as Haemophilus influenzae type b (Hib), Neisseria meningitidis serogroup A, and Streptococcus pneumoniae are targeted by highly effective protein-conjugate vaccines, life-threatening disease caused by invasive NTS (iNTS) is likely to become relatively more prominent [2]. The strong epidemiological association between malaria and iNTS disease has been documented in several African countries, including Malawi [1,3,4], and there is increasing biological evidence that multiple malaria-induced immune defects predispose to iNTS disease, including iron release from haem, impaired neutrophil function and reduction in IL12 production [5-7]. In addition, there are many other factors which may influence host susceptibility to invasive NTS disease among children, including inadequate protective antibody [8,9], malnutrition [10] and impaired cell-mediated immunity caused by HIV infection [11-13]. Recently, a temporal association between a decline in the incidence of malaria and a falling incidence of iNTS disease has been reported from both The Gambia and Kenya [14,15]. This has led to the suggestion that effective population-based malaria interventions might result in control of iNTS disease across the continent without the need for specific NTS-targeted measures. In Malawi, we have observed a fall in iNTS disease over 10 years of bacteraemia sentinel surveillance. Here, large-scale implementation of malaria control interventions gained considerable momentum in 2007; however, by 2010 these interventions had yet to impact on the incidence of mild and severe malaria [16]. Alongside this, there has been a highly effective rollout of antiretroviral therapy (ART) since 2004, with scale-up of prevention of mother-to-child transmission (PMTCT) from 2006 [17,18]. In addition, a programme to subsidize fertilizer for subsistence farmers, which began in 2005 [19], has contributed to reductions in all measures of child malnutrition between 2004 and 2010 [20]. Finally, we have previously described a strong seasonal relationship between rainfall and iNTS disease [21]. We hypothesised that the underlying risk factors of rainfall, malnutrition and HIV, in addition to malaria, would be associated, directly and/or indirectly, with the observed changes in iNTS disease incidence. Many of these risk factors are known to interact. To assess the complex interrelationships of factors associated with iNTS disease, we have used structural equation modelling (SEM) to analyse longstanding surveillance data collected at the largest government hospital in Malawi from 2001-2010. The inter-relationships between the monthly numbers of malaria, malnutrition and HIV cases and their association with the corresponding monthly numbers of iNTS cases presenting over the same time period have been modelled in the context of rainfall levels. Rainfall potentially has both direct environmental effects on NTS transmission, through increased surface water, and indirect effects on host susceptibility, largely through malaria transmission and under-nutrition in the rainy season.

Study site and population

Queen Elizabeth Central Hospital (QECH) is a 1250-bed government-funded hospital, serving a population of approximately 1 million. Approximately 50,000 children/year are assessed at QECH, of whom approximately a quarter are admitted. Malaria transmission is endemic, with seasonal peaks during the rainy season [22].
Admissions for severe acute malnutrition (SAM) also peak during the rainy season, as increased infection risk coincides with the peak of the "hungry season", when household food supplies are running low whilst the new season's crops are growing. The prevalence of HIV in pregnant women was approximately 22% within Blantyre in 2001, declining to 16% by 2010 (Malawi Ministry of Health Quarterly HIV Programme Reports). Vertical transmission of HIV was estimated to occur in 17% of HIV-infected pregnancies prior to 2008 and had declined to 13.5% by 2010 [23]. In Blantyre, 89% of HIV-infected children died by the age of 3 years prior to the roll-out of ART [24].

Clinical management and laboratory methods

The primary route of admission for children is the paediatric Accident and Emergency (A&E) Unit. All children presenting unwell to hospital with non-surgical illness have blood obtained for a thick blood-film malaria parasite examination and, if slide-positive, were diagnosed with malaria; no account was taken of severity in this study. Blood cultures are obtained from children in whom sepsis is suspected; the criteria for obtaining blood for culture did not change during the study period, nor did the numbers of cultures taken [25]. Automated blood culture was undertaken using a paediatric bottle (BacT/Alert PF; BioMerieux, UK) incubated at 37°C in air. Gram-negative isolates were identified using standard techniques, including overnight incubation on blood and MacConkey agar at 37°C in air, and, if oxidase negative, identified by API 20E (BioMerieux, UK). Salmonellae were then serotyped as S. Enteritidis, S. Typhimurium, S. Typhi or S. sp. by the following antisera: polyvalent O & H, O4, O9, Hd, Hg, Hi, Hm, and Vi (Prolab Diagnostics, UK) [26].

Data abstraction

Numbers of P. falciparum-positive blood-slides are recorded at the end of each month at the paediatric A&E unit at QECH, and an analysis of trends of slide-positivity and of severe disease, from January 2001 to December 2010, has previously been undertaken [16]. To relate trends of malaria infection to indices of iNTS disease, all blood cultures collected from paediatric admissions during the same period were reviewed. Daily rainfall data (mm) were obtained for the Blantyre District from the Department of Climate Change and Meteorological Services, Malawi. As national nutritional data were only available for two time points, monthly admission numbers to the "Moyo" Nutritional Rehabilitation Unit (NRU) at QECH were used as a proxy indicator of the incidence of SAM. The admission policy for the NRU did not change throughout the study period, following standard definitions of SAM: weight-for-height < 70% of the NCHS reference median, and/or nutritional oedema, and/or a mid-upper arm circumference (MUAC) < 110 mm [27]. Admissions to the NRU almost all come through the paediatric A&E. Only rarely, if children are very sick or their malnutrition is initially missed, are they first admitted to the general wards and transferred later.

Data estimates

As data were unavailable for malaria for the fourth quarter of 2004 and NRU admissions data were unavailable for 2005, these data had to be estimated. As both variables are strongly seasonal, estimates were made for each missing month by calculating the mean of the corresponding month one year before and one year after.
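A minimal pandas sketch of this interpolation rule (averaging the same calendar month one year earlier and one year later) might look as follows; the example series and its values are hypothetical and serve only to illustrate the procedure.

```python
import pandas as pd

def fill_seasonal_gaps(series: pd.Series) -> pd.Series:
    """Fill missing months with the mean of the same calendar month one
    year before and one year after (requires a monthly DatetimeIndex)."""
    out = series.copy()
    for ts in out[out.isna()].index:
        neighbours = [out.get(ts - pd.DateOffset(years=1)),
                      out.get(ts + pd.DateOffset(years=1))]
        vals = [v for v in neighbours if pd.notna(v)]
        if vals:
            out[ts] = sum(vals) / len(vals)
    return out

# Hypothetical monthly counts with the last quarter of 2004 missing
idx = pd.date_range("2003-10-01", "2005-12-01", freq="MS")
counts = pd.Series([210, 290, 370]
                   + [410, 520, 600, 480, 300, 210, 180, 160, 170,
                      None, None, None]
                   + [420, 510, 590, 470, 310, 200, 190, 150, 160,
                      230, 300, 380],
                   index=idx)
print(fill_seasonal_gaps(counts)["2004-10":"2004-12"])  # e.g. (210+230)/2
```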
As there is evidence that the risk of iNTS disease due to HIV infection declines following effective ART, and that the vast majority of cases of paediatric iNTS disease occur in children under 3 years of age, an estimate was made for the number of children under 3 years in Blantyre with untreated HIV during the study period [28-30]. This estimate was made by taking the estimated number of HIV-infected pregnancies per year in Blantyre during the study period and multiplying this by the estimated incidence of vertical transmission, assuming a 1%/year fall from 18% in 2006 to 14% in 2010 (Government of Malawi, Ministry of Health: Quarterly HIV Program Reports (2005-2014) https://www.hiv.health.gov.mw/index.php/our-documents) [23]. Malawi Ministry of Health ART programme data were used to estimate the number of children in Blantyre on effective ART, based on the programme starting in 2006 and reaching 30% coverage by 2010, and estimating that ART achieved effective protection against iNTS disease in 70% of recipients. Mortality was estimated at 30%/year in the first three years of life for those not on ART, based on published studies from Malawi (S1 Table) [24].

Statistical methods

The total numbers of iNTS cases, malaria cases and admissions to the NRU were computed, along with the total rainfall (in mm), for each month of the study period (Fig 1). As the predictor variables (rainfall, malnutrition, HIV and malaria prevalence) were inter-related, a series of structural equation models (SEM) were fitted to the data [31]. SEM allows a much more complex set of hypothesised inter-relationships to be explored between variables than is possible with standard multivariable linear regression methods, which require the assumption that a set of predictor variables are independently associated with (usually a single) outcome variable. In the context of this study, SEM methods construct a Bayesian network comprising nodes, representing the variables selected for investigation, linked by arrows indicating probabilistic relationships. A standardised regression coefficient for each line, calculated by the software, indicates the relative contribution of each relationship. This network allows the model to describe additional relationships between the multiple predictor variables. In addition, since the probability and regression coefficient represented by arrows in the model vary according to the direction of the arrow, SEM methods provide stronger evidence for interpreting variable parameters as causal effects when analysing data from cross-sectional studies than is the case with multivariable linear regression models [32].

A simple graphical examination of the incidence estimates for iNTS, malaria, malnutrition and rainfall showed the expected strong monthly cyclical seasonal patterns along with year-on-year reductions, while HIV prevalence did not show a seasonal pattern (Fig 1). In the first SEM, the monthly numbers of iNTS cases, malaria cases, HIV cases, NRU admissions and rainfall levels were analysed in order to model cyclical monthly variations in these variables. In the second SEM, however, the data were smoothed by taking 12-month rolling means in order to remove the effect of month-by-month seasonality and hence evaluate the impact of variables acting on iNTS disease over longer periods, in particular HIV infection. Goodness-of-fit of each SEM was determined using a chi-square test and the root mean square error of approximation (RMSEA) statistic [33].
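The models themselves were fitted in IBM SPSS Amos (release 20.0, as noted below). Purely for illustration, an analogous path model could be specified in Python with the third-party semopy package; the variable names, the data file, and the simplified path structure here are hypothetical stand-ins for the published model, not a reproduction of it.

```python
import pandas as pd
import semopy  # third-party SEM package (pip install semopy)

# Hypothetical monthly data with columns:
# iNTS, malaria, malnutrition, hiv, rainfall, time
data = pd.read_csv("blantyre_monthly.csv")

# Simplified path specification mirroring the hypothesised structure:
# rainfall drives malaria and malnutrition; malaria and malnutrition
# (with HIV acting via malnutrition) drive iNTS; time captures trends.
desc = """
malaria      ~ rainfall + time
malnutrition ~ rainfall + hiv + time
iNTS         ~ malaria + malnutrition + time
"""
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())           # path coefficient estimates and p-values
print(semopy.calc_stats(model))  # fit indices, including chi-square and RMSEA
```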
All SEMs were constructed using the IBM SPSS Amos (release 20.0) software package. For this exploratory model, statistical significance was set at an alpha of 10%.

Results

A total of 49,093 blood cultures were taken between January 2001 and December 2010. Of these, 21% (n = 10,265) yielded isolates of clinical significance and 7,244 (15%) yielded likely contaminants. NTS were isolated from 3,105 (30%) of the 10,265 significant blood cultures (Fig 1). The iNTS disease epidemic peaked in 2002. Monthly malaria diagnostic data from the paediatric A&E unit were available for the same period, except from October to December 2004. A total of 242,953 slides were taken for malaria between January 2001 and December 2010, and 61,320 (25.2%) of these were found to be P. falciparum positive. Totalled by year (Table 1 and Fig 1), iNTS incidence fell on average by 11.8% annually (p < 0.001), while malaria incidence fell on average by 4.0% per year (p < 0.001), largely due to a fall between 2001 and 2003; admissions to the NRU fell on average by 5.8% per year (p = 0.143). There were marked variations in average rainfall levels over the study decade but no evidence of any systematic trend, either upwards or downwards (p = 0.927).

Structural equation modelling

We hypothesised that the control of multiple conditions associated with increased risk of iNTS disease, including malaria, HIV and malnutrition, was likely to have led to a decline in iNTS disease. SEM were therefore constructed from the peak of the epidemic in 2002 to 2010 (Fig 2 and Fig 3; see S1 Table for complete monthly data). The relationships between variables, including seasonal variations, were explored by modelling monthly data describing culture-confirmed iNTS disease, slide-positive malaria cases, NRU admissions (representing malnutrition), rainfall and HIV (Fig 2). This demonstrated statistically significant and direct contributions to iNTS disease from both malaria and malnutrition. There was also a non-directional correlation between malaria and malnutrition. In this model, rainfall had no direct impact upon iNTS disease, but had a strong and significant indirect impact through its effects upon malaria and upon malnutrition, lending biological plausibility to the model. This model suggested that whilst HIV had no direct effect on iNTS disease, it indirectly contributed to iNTS disease through its effect upon malnutrition. Time was found to have statistically significant negative relationships with some variables, suggesting that other factors outwith the model were contributing to the marked decline of malaria and HIV disease in Blantyre, as indicated by high standardised regression coefficients. The smaller negative effect of time upon iNTS disease suggested that a greater proportion of its decline was captured by the other variables in the model.

In order to investigate long-term non-seasonal trends in iNTS disease, a second SEM was constructed, this time smoothing seasonality out of the data (iNTS disease, malaria and malnutrition) by taking a 12-month rolling mean (Fig 3). This enabled the effect of HIV prevalence, which is not seasonal and which changes over longer time periods, to be better evaluated. Rainfall did not contribute any statistically significant relationships once month-to-month seasonality was removed, and was therefore not included in this model. As in the first model, both malaria and malnutrition continued to exhibit major direct relationships with iNTS disease, while the previous non-directional correlation between them disappeared.
In this model, however, HIV also demonstrated a direct, significant effect upon iNTS disease in addition to an indirect one through its contribution to malnutrition. Similar time effects on iNTS disease, malaria and HIV were seen in this model. Here, the standardised regression coefficient for the association between time and the number of iNTS cases presenting was approximately 50% of the corresponding coefficient in the first model, suggesting that there were fewer unexplained factors in this model acting on the observed decline in iNTS cases. Once again, in addition to the statistically significant interactions of the variables within the model, the overall model fit was good (chi-square(2) = 1.121, p = 0.571; RMSEA ≤ 0.001, 90% CI <0.001-0.162).

Discussion

In many populations in SSA, we are largely ignorant of the burden of iNTS disease, the impacts of potential risk factors, routes of transmission, and long-term trends [2]. Without such knowledge, it is impossible to plan and evaluate interventions. Analysis of very large data sets from Blantyre, Malawi reveals a significant and sustained decline in iNTS disease, following an epidemic peak between 2001 and 2004. There has not been a comparable or sustained decline in the malaria slide-positivity rate since that described between 2001 and 2003 [16], a finding which is corroborated across Malawi by WHO surveillance [34]. Whilst our data are from a single hospital setting and might have failed to appreciate a decline in mild malaria in the community, both admissions with cerebral malaria and asymptomatic parasitaemia on an orthopaedic unit reflected clinical malaria cases throughout the study period [16]. We therefore hypothesised that the decline in paediatric malaria cases from a peak in 2001 to the beginning of a plateau in 2003 had contributed to the initial observed fall in iNTS, but that subsequently, other factors such as the prevalence of HIV and malnutrition were also important contributors. We used SEM analytical methods to unravel this complex relationship, as the factors associated with iNTS disease are interrelated and cannot be addressed using standard regression methods. This allows the possibility of exploring an indirect association between a predictor variable and the outcome variable through additional predictor variables, therefore allowing mediating and confounding effects to be modelled [31]. Malaria undoubtedly contributed significantly and directly to the decline in iNTS disease, consistent with the experiences reported in The Gambia and Kenya [14,15], and was the variable that was most strongly associated with iNTS disease in both models. Our exploratory modelling, however, clearly demonstrated that other factors are associated with reductions in iNTS disease, whether by direct or indirect mechanisms or both. It is clear that HIV and malnutrition contribute to changes in the incidence of iNTS disease, through both direct and indirect effects. Seasonal rainfall, by contrast, exerted a complex and important indirect effect on iNTS disease through its contribution to both malaria and malnutrition. HIV was previously associated with 18% of childhood iNTS disease in a study of Kenyan children [10].
Several developments in Malawi have reduced the prevalence of HIV infection among young children in Blantyre, including extensive anti-retroviral therapy (ART) roll-out since 2005 and a national programme of peri-partum ART to reduce mother-to-child transmission of HIV, and we hypothesised that this has had an impact on iNTS disease [23]. In the SEM displaying seasonal variation, HIV prevalence in Blantyre indirectly impacted upon iNTS through its effect upon malnutrition, highlighting the strength of this modelling methodology. When the effect of month-on-month seasonality was smoothed out to account for the lack of seasonality of HIV, it retained a very strong influence on malnutrition and had an additional direct, significant effect upon iNTS disease. This model suggested that the combined effects of HIV and malnutrition were of similar magnitude to that of malaria on iNTS incidence. The prevalence of underweight-for-height status in children aged <5 years in Malawi, which was static between 2000 and 2004, declined from 6% in 2004 [35] to 4% in 2010 [20], and this reduction was reflected in the decline in NRU admissions to QECH. SEM using NRU admissions revealed that nutrition has strong interactions with both rainfall and HIV. The smoothed model suggested that approximately half of NRU admissions were explained by variations in HIV prevalence. Of note, the positive standardised regression coefficient from year to NRU admissions suggests that NRU admissions are gradually increasing once reductions due to other factors in the model, such as HIV and iNTS disease, are taken into account, emphasising the complex and multi-causal epidemiology of malnutrition. In contrast, the model also reflects the very significant reductions over time of both malaria and HIV once factors outwith the model are accounted for, a finding that we interpreted as reflecting the known successes of recent control programmes for malaria and HIV in Malawi.

Limitations

These observational data come from a single centre within Blantyre, and it is likely that some young children die from unidentified and untreated iNTS in the community. Furthermore, we do not have a longitudinal survey reflecting changes in patterns of health service utilisation over this surveillance period. Other factors, such as the changing availability of medications such as antibiotics or anti-malarials, might also have exerted an unseen or proxy effect on our data. Our use of NRU admissions as a proxy indicator for population nutrition status means that nutrition-related observations must be interpreted with care, as the NRU only admits cases of SAM. These do not necessarily reflect population distributions (particularly after mid-2008, when community-based SAM treatment began in Blantyre district) or the impact of the two other forms of malnutrition, underweight and stunting. Nonetheless, they do reflect the national data relating to all forms of under-nutrition, and as the models presented use actual hospital admission figures, they accurately measure disease burden on health facilities in the study area. Whilst our models have good statistical strength, the changes in iNTS disease that were directly attributed to time indicate that factors outwith the model also affect iNTS disease in Malawi. In particular, it is likely that access to safe drinking water and to sanitation and hygiene (WASH) facilities affects iNTS disease, and we have not been able to include data reflecting changes in the provision of WASH facilities in Blantyre over the study period in the model.
Conclusions

Our data suggest that the interaction between iNTS disease and malaria, HIV and malnutrition is complex, and that the observed decline in iNTS disease is likely to have been due to multiple public health interventions. We estimate that slightly less than half of this change is explained by a decline in malaria and that a similar proportion is explained by changes in the local epidemiology of HIV, both directly and through its impact on malnutrition. We illustrate the potential of modelling methodology and sentinel surveillance data to inform the use of public health interventions to reduce iNTS disease in SSA. The model indicates some gaps in the data, suggesting that there are other, unknown factors that also appear to influence the incidence of iNTS disease. It is therefore likely that direct interventions such as NTS vaccines or improvements in WASH facilities will be required to match the recent achievements seen in the control of other severe, life-threatening bacterial infections such as Haemophilus influenzae type b, pneumococcal and meningococcal disease.
Ebstein Anomaly in a 60-Year-Old Patient: A Lucky Finding

Ebstein anomaly is a rare congenital disease of the tricuspid valve (<1%) diagnosed at all ages. A single case of an 85-year-old patient was reported in 1979 as the longest survival with Ebstein anomaly; that patient had no cardiac symptoms until 79 years of age. The aim of this case report is to highlight the need for early echocardiographic diagnosis of this disease to prevent sudden death from arrhythmias or other complications, because, as we see, patients with Ebstein anomaly can live a long, healthy, asymptomatic life. The patient described in this case is a 60-year-old male, diabetic and a heavy smoker, who presented to the cardiology department with fatigue and atypical angina with dyspnea on moderate effort. Cardiac ultrasound was in favor of an isolated Ebstein anomaly type A, with partial atrialization of the right ventricle (RV), an adequate right ventricular volume (17 cm²) and no other specific associated anomalies. The symptoms described by the patient were purely pulmonary, due to a mild obstructive disease; he was diagnosed with chronic obstructive lung disease attributable to his smoking habit. Reaching this age and being asymptomatic with conserved RV and LV function is a sign of good outcome. This case was an interesting lucky finding: it was astonishing to see a patient surviving this anomaly asymptomatically at 60 years old.

Introduction

Ebstein anomaly is a congenital disease of the tricuspid valve, associated with displacement of the septal and posterior tricuspid leaflets toward the apex of the right ventricle (RV). It results in atrialization of the right ventricle. It is a rare abnormality diagnosed at all ages, and the age at presentation depends on the presence of associated diseases and the severity of the tricuspid regurgitation [1]. There is an increase in the prenatal diagnosis of this disease, as the anomaly develops in fetal life; about 1 in 20,000 babies is diagnosed with Ebstein anomaly (<1%) [1]. It can be induced by maternal exposure during pregnancy to benzodiazepines, antihypertensive drugs, valproic acid, marijuana, and organic solvents. The association with lithium has been disputed, and some emerging data about maternal β-thalassemia need to be confirmed [2]. A single case of an 85-year-old patient was reported in 1979 as the longest survival with Ebstein anomaly, having had no cardiac symptoms until 79 years of age. Older patients often present with palpitations due to arrhythmias, or with mild symptoms such as exertional dyspnea and sometimes fatigue. Myocardial fibrosis of the right ventricle due to this anomaly is a predictor of a high rate of arrhythmias, and the disease may even be complicated by heart failure symptoms. The key to preventing arrhythmias and other complications is early diagnosis, as adult patients can have good long-term outcomes [1].

Clinical Case

A 60-year-old patient with a 7-year history of diabetes mellitus (on glimepiride 4 mg daily, not very well controlled on treatment) and dyslipidemia (on simvastatin), and a heavy smoker, presented for chest pain. One month previously, the patient had begun complaining of fatigue and atypical angina. His pain was rarely oppressive, did not radiate to his left arm, was not associated with diaphoresis, and was sometimes exacerbated by exertion. Furthermore, the patient noticed episodes of intermittent dyspnea on moderate effort. He also complained of chronic fatigue and weight loss over the previous 2 years. He had normal vital signs. On physical examination, S1 and S2 were normal with no associated murmurs.
Lungs were clear apart from some bilateral basal wheezing. His chest X-ray revealed a normal heart size with overinflation and flattening of the diaphragm. The patient was in sinus rhythm with no ST-T changes, preexcitation or other ECG modifications. Cardiac ultrasound was in favor of an isolated Ebstein anomaly type A (Carpentier et al., 1988), with partial atrialization of the right ventricle (RV), an adequate right ventricular volume (17 cm²) and no other specific associated anomalies (no RV infundibular obstruction, pulmonary atresia or shunts), although the interatrial septum was mildly aneurysmal. The rest of the echo revealed normal LV function, normal strain (GLPS average = −18%) and normal diastolic function with normal LVEDP. The right ventricle was not dilated. Tricuspid regurgitation was grade I, associated with a systolic pulmonary artery pressure (SPAP) of 16 mmHg and an anomalous motion of the interventricular septum. In addition, a tricuspid valve apical displacement >8 mm/m² (in this case, 17 mm/m²) was pathognomonic of this anomaly [Figure 1]. Cardiac catheterization was performed to rule out any associated coronary anomalies; it revealed normal coronary arteries.

Discussion

Long-term follow-up of this anomaly is rarely reported, owing to the low prevalence of the disease and the high variation in its anatomic components. In a retrospective review of 51 patients with Ebstein anomaly, the mean age at diagnosis was 21 ± 21 years. Overall survival in this single-center study was 100% to 40 years, 95% to 50 years and 81% to the age of 60 [3]. In patients with mild Ebstein anomaly, survival has been reported into the ninth decade; a single case of an 85-year-old patient was reported in 1979 as the longest survival with Ebstein anomaly, with no cardiac symptoms until 79 years of age. These patients may be asymptomatic for a long time, and those who reach late adolescence and adulthood often have a good long-term outcome. On the other hand, fetal and neonatal presentations have a poor outcome [4]. Our patient was 60 years old when he presented for a cardiac workup. Reaching this age and being asymptomatic with conserved RV and LV function is a sign of good outcome. Ebstein anomaly includes many features: adherence of the tricuspid valve leaflets to the myocardium, with apical displacement of the septal and posterior leaflets; atrialization of the right ventricle to varying degrees, with a redundant and fenestrated anterior leaflet; and varying degrees of tricuspid regurgitation and cyanosis with different presentations. In addition, dilation of the right atrium may also be present. Other echocardiographic anomalies can accompany this condition, such as ASD or PFO, VSD, mitral valve prolapse, RVOT obstruction and LV anomalies. Electrophysiological anomalies such as accessory pathways and atrial tachycardia may be involved, and catheter ablation can be a solution [5,6,7]. Heart failure is also a severe consequence of Ebstein anomaly. In this case report, the patient had no associated congenital malformation or arrhythmias [8]. A study of 37 adults with Ebstein anomaly, aged 43.0 ± 14.4 years, revealed that fibrosis, assessed by LGE on cardiac MRI, was the initial cause of arrhythmias and heart failure; 20% of patients reaching adulthood died from arrhythmias, whereas 50% died from heart failure [9]. Sudden cardiac death may occur in asymptomatic patients with mild disease and a normal cardiothoracic index on chest X-ray.
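As a side note on the quantitative criterion quoted above, the sketch below shows how the body-surface-area-indexed apical displacement is obtained; the absolute displacement and BSA values are hypothetical, chosen only so that they reproduce the 17 mm/m² index reported in this case.

```python
# Hypothetical worked example of the BSA-indexed apical displacement
# criterion for Ebstein anomaly (>8 mm/m^2); these are not the patient's raw data.

def indexed_displacement(displacement_mm: float, bsa_m2: float) -> float:
    """Apical displacement of the septal tricuspid leaflet, indexed to
    body surface area (mm/m^2)."""
    return displacement_mm / bsa_m2

index = indexed_displacement(displacement_mm=30.6, bsa_m2=1.8)  # assumed values
print(f"{index:.1f} mm/m^2 -> criterion {'met' if index > 8.0 else 'not met'}")
# Output: 17.0 mm/m^2 -> criterion met
```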
There is no evidence that tricuspid surgery reduces the incidence of sudden cardiac death in asymptomatic patients, and the risk of surgery is high [10]. In this case, the patient was treated symptomatically, as there is no evidence that survival would be better after surgery. In 1999, a case report described an association between Ebstein anomaly, myocardial bridging and an anomalous coronary artery [11]. In this case, cardiac catheterization did not reveal any bridging; the patient's symptoms were purely pulmonary, with a mild obstructive pattern on PFTs. The standard diagnostic tool for this entity is transthoracic echocardiography [12]. Magnetic resonance imaging (MRI) is not used routinely [8]. On the other hand, combining cardiac MRI with transesophageal echocardiography and, rarely, cardiac catheterization provides additional information for surgical decision making [13]. Carpentier et al. classified Ebstein anomaly in 1988 into four types: type A, with an adequate volume of the true RV; type B, with a large atrialized RV but a freely moving anterior leaflet; type C, associating obstruction of the RVOT with a severely restricted anterior leaflet; and type D, with almost complete atrialization of the RV. A prospective study of thirty-two patients with Ebstein anomaly concluded that the size of the RV appears to depend on the degree of tricuspid regurgitation, as seems to be true in the case described above [14]. Furthermore, dilatation of the right heart with decreased contractility is a feature of poor prognosis. The echocardiographic features for the diagnosis of this anomaly include, on M-mode, paradoxical septal motion with delayed closure of the tricuspid valve leaflets, and, on two-dimensional echocardiography, an apical displacement of the septal leaflet of more than 8 mm/m² with eccentric leaflet coaptation. The same features were present in the echocardiography of our case and helped establish the diagnosis [8]. Of note, the patient's electrocardiogram was normal, with no associated signs of delayed RV conduction or prolonged RV activation, consistent with mild disease [15].

Conclusion

In summary, we have described a rare case of Ebstein anomaly presenting at 60 years of age for cardiologic assessment. The patient had atypical chest pain with dyspnea on exertion, explained mainly by an obstructive pattern, as he was a heavy smoker. His echocardiography revealed an Ebstein anomaly with conserved RV and LV function, without dilatation or significant tricuspid regurgitation. It was an interesting lucky finding, and astonishing to see a patient surviving this anomaly asymptomatically at 60 years old. This case highlights the need for preventive and early diagnostic strategies to avoid complications such as arrhythmias, congestive heart failure and sudden cardiac death in adults with Ebstein anomaly, because, as we have seen, the disease, if diagnosed early and followed carefully, can remain asymptomatic for a long time with a very good prognosis in adults.
Suppression of Oxidative Stress and Proinflammatory Cytokines Is a Potential Therapeutic Action of Ficus lepicarpa B. (Moraceae) against Carbon Tetrachloride (CCl4)-Induced Hepatotoxicity in Rats

Local tribes use the leaves of Ficus lepicarpa B. (Moraceae), a traditional Malaysian medicine, as a vegetable dish, a tonic, and to treat ailments including fever, jaundice and ringworm. The purpose of this study was to look into the possible therapeutic effects of F. lepicarpa leaf extract against carbon tetrachloride (CCl4)-induced liver damage in rats. The DPPH test was used to measure the antioxidant activity of the plant, and gas chromatography-mass spectrometry (GC-MS) was used for the phytochemical analysis. Six groups of male Sprague-Dawley rats were subjected to the following treatment regimens: control group, CCl4 alone, F. lepicarpa 400 mg/kg alone, CCl4 + F. lepicarpa 100 mg/kg, CCl4 + F. lepicarpa 200 mg/kg and CCl4 + F. lepicarpa 400 mg/kg. The rats were euthanized after two weeks, and biomarkers of liver function and antioxidant enzyme status were assessed. To assess the extent of liver damage and fibrosis, histopathological and immunohistochemical examinations of liver tissue were undertaken. The total phenolic content and the total flavonoid content of the methanol extract of F. lepicarpa leaves were 58.86 ± 0.04 mg GAE/g and 44.31 ± 0.10 mg CAE/g, respectively. The inhibitory concentration (IC50) of F. lepicarpa for free radical scavenging activity was 3.73 mg/mL. In a dose-related manner, F. lepicarpa was effective in preventing increases in serum ALT, serum AST and liver MDA. Histopathological alterations revealed that F. lepicarpa protects against the oxidative stress caused by CCl4. The immunohistochemistry results showed that proinflammatory cytokines (tumour necrosis factor-α, interleukin-6, prostaglandin E2) were suppressed. The antioxidative, anti-inflammatory and free-radical scavenging activities of F. lepicarpa can be related to its hepatoprotective benefits.

Introduction

The liver is the largest organ in the body; it accounts for around 2% of adult body weight, is dark reddish-brown in colour, is shaped like a cone and weighs roughly 1.5 kg. The liver is found on top of the stomach, right kidney and intestines, in the upper right-hand region of the abdominal cavity beneath the diaphragm [1,2]. It has an incredible number of functions that support the function of other organs and have an impact on all physiologic systems [3]. Every year, approximately 2 million people die from liver disease around the world [4]. The Global Health Estimates 2019-2020 by the World Health Organization (WHO) noted liver diseases as the 11th most common cause of death worldwide; in South East Asia (SEA), liver diseases were ranked the 9th most common cause of death in 2019, increasing to the 8th most common cause in 2020. In Malaysia, liver disease was the 7th leading cause of death over the years 2010-2019 [5,6].

F. lepicarpa's In Vitro Antioxidative Activity

Total phenolic content (TPC) concentrations in F. lepicarpa methanol extracts were calculated using the calibration equation (y = 4.268x + 0.0436, r² = 0.9939) as gallic acid equivalents (GAE mg/g of extract). The TPC of F. lepicarpa was found to be 58.86 ± 0.04 mg GAE/g. The TFC of F. lepicarpa was estimated using the equation (y = 3.35x − 0.019, r² = 0.9937) and expressed as catechin equivalents (CAE mg/g). The total flavonoid content (TFC) obtained was 44.31 ± 0.10 mg CAE/g.
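As an illustration of how such calibration lines are used (with y the measured absorbance and x the standard concentration in mg/mL, inverted to read off a sample's equivalent concentration), here is a minimal sketch. The absorbance readings and the final scaling to mg per gram of extract are assumptions, since they depend on assay volumes and extract mass not restated here.

```python
# Invert a calibration line y = slope*x + intercept (y: absorbance,
# x: standard concentration in mg/mL) to get an equivalent concentration.

def conc_from_absorbance(absorbance: float, slope: float, intercept: float) -> float:
    return (absorbance - intercept) / slope

# TPC line: y = 4.268x + 0.0436 (gallic acid standards); absorbance is hypothetical.
gae_mg_per_ml = conc_from_absorbance(0.30, slope=4.268, intercept=0.0436)

# TFC line: y = 3.35x - 0.019 (catechin standards); absorbance is hypothetical.
cae_mg_per_ml = conc_from_absorbance(0.20, slope=3.35, intercept=-0.019)

# Scaling to mg equivalents per gram of extract then depends on the assay
# volume and the mass of extract in the reaction mixture.
print(f"{gae_mg_per_ml:.3f} mg GAE/mL, {cae_mg_per_ml:.3f} mg CAE/mL")
```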
The antioxidant activity of the methanol extract was calculated from its scavenging of DPPH (a free radical compound). The ability of F. lepicarpa extract to scavenge DPPH radicals increased with concentration; at a concentration of 3.73 mg/mL, the methanolic extract of F. lepicarpa scavenged 50% of the DPPH radicals (IC50).

F. lepicarpa Preliminary Phytochemical Screening

As indicated in Table 1, preliminary phytochemical screening of F. lepicarpa methanol extract revealed the presence of various phytochemicals such as flavonoids, phenols, saponins, steroids, phytosterols and triterpenoids, whereas alkaloids, tannins and anthraquinones were undetectable.

Liver Index and Body Weight

The body weight of the rats was measured every week until they were euthanized. Table 3 shows the body weight, percentage increase in body weight and liver index of rats from each group. The liver index is the ratio of the weight of the liver to the body weight of the rat, expressed as a percentage of the somatic weight. It is a biomarker that indicates the status of feeding and metabolism: a large liver signifies a high level of metabolic activity, whereas a small liver can indicate a shortage of food [21]. The CCl4-treated group was found to have a higher liver index than the control group; however, the differences were not statistically significant. The liver index for the control and plant control groups did not differ significantly. Body weight was reduced as a result of CCl4. Animals given F. lepicarpa extracts (100, 200 and 400 mg/kg bwt) for two weeks showed a moderate increase in body weight. The proportion of weight gain in the CCl4 group was found to be very low when compared to the control and plant control groups.

ALT and AST are the most common liver damage markers detected in the blood. Elevations in these values, which reflect liver function, suggest liver damage; a decrease in these liver enzyme activities is not in itself considered to be toxicologically significant. Table 4 shows that the serum markers in the CCl4 group were substantially higher (p < 0.05) than in the control and plant control groups. For these markers, the control and plant control groups had near-identical values, which were within the typical laboratory range [22]. Animals given different doses of plant extracts, on the other hand, showed a significant decline (p < 0.05) in marker levels, recovering in a dose-dependent manner.

F. lepicarpa's Effect on Liver Reduced Glutathione (GSH)

Reduced GSH is an intracellular non-enzymatic antioxidant that aids in the defence against free radicals in the liver. Under oxidative stress, GSH levels in tissues are significantly reduced. As shown in Table 5, the level of GSH in the CCl4 group decreased by 25% (p < 0.05) when compared to the control and plant control groups. When compared to the CCl4-treated group, the groups pre-treated with F. lepicarpa (100, 200 and 400 mg/kg bwt) demonstrated a dose-related increase in GSH, resulting in a recovery of 42, 44 and 59 percent, respectively.

F. lepicarpa's Effect on Liver Lipid Peroxidation (LPO)

Peroxidation of lipids is a form of oxidative damage to polyunsaturated lipids caused by the reaction of lipids and oxygen, which results in the formation of free radicals and peroxides that cause cell and organelle damage [23]. The end product of lipid peroxidation is malondialdehyde (MDA), and the MDA level in liver homogenate can be examined to assess the degree of liver damage.
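The Methods (further below) quantify MDA from the absorbance of the thiobarbituric acid adduct at 535 nm via the Beer-Lambert law. As a worked illustration, the sketch below uses the extinction coefficient quoted there; the absorbance reading, path length and final per-gram scaling are assumptions for illustration only.

```python
# Sketch of MDA quantification from TBARS absorbance (Beer-Lambert law: c = A / (eps * l)).
EPSILON = 1.56e5   # M^-1 cm^-1, molar extinction coefficient quoted in the Methods
PATH_CM = 1.0      # cuvette path length in cm (assumed)

def mda_concentration(a535: float) -> float:
    """MDA concentration (mol/L) in the assay mixture from absorbance at 535 nm."""
    return a535 / (EPSILON * PATH_CM)

a535 = 0.25  # hypothetical reading against the blank
print(f"{mda_concentration(a535) * 1e9:.1f} nmol/L of MDA in the assay mixture")
# Converting to nmol MDA per gram of tissue then requires the assay volume
# and the grams of tissue represented in the reaction mixture.
```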
When compared to the control group, the LPO level in the CCl4 treatment group was substantially higher (p < 0.05), as shown in Table 5. The control group and plant control group showed similar results. The groups pretreated with F. lepicarpa (100, 200 and 400 mg/kg bwt) extract showed lower MDA formation, by 85%, 71% and 53%, respectively, in a dose-related manner, and the higher-dose treatment group showed a significant decrease (p < 0.05) in MDA formation when compared to the CCl4 model treatment group. The increased protection against LPO by the plant extracts suggests that either the activation or the reaction of oxygen and lipids to form free radicals and peroxides may be inhibited; inhibition of chain initiation and propagation, as well as acceleration of chain termination, may have suppressed free radical-mediated lipid peroxidation.

Enzymatic antioxidants play an essential role in the cell defence system, especially during the healing process. The antioxidant enzymes (GPx and GR) and phase II metabolising enzymes (GST and QR) were studied to learn more about the mechanism of F. lepicarpa extract's protection against CCl4 liver damage. Table 6 shows the levels of the antioxidant enzymes GPx, GR, GST and QR in the various treatment groups. All enzymes in the CCl4 model-treated group were significantly depleted (p < 0.05), due to increased free radical activity. The plant control group for F. lepicarpa (400 mg/kg bwt) had increased antioxidant enzyme activity, due to antioxidant supplementation from the plant itself. The antioxidant enzymes were restored in a dose-dependent manner in the F. lepicarpa + CCl4 treated groups; as a result, enzymatic activity increased significantly (p < 0.05). The groups pre-treated with F. lepicarpa (100, 200 and 400 mg/kg bwt) extracts showed recovery in the antioxidant enzymes GPx (40%, 61% and 63%, respectively), GR (49%, 68% and 85%, respectively), GST (54%, 67% and 75%, respectively) and QR (71%, 88% and 89%, respectively) compared to the CCl4 model group.

F. lepicarpa's Effect on Liver Histopathology

Histopathological changes in liver cells can provide evidence of the biochemical effects of liver protection. As shown in Figure 2A-F, alterations in the histopathology of liver damage were observed, including cell necrosis (N), fatty degeneration (FD), mononuclear cell infiltration (MCI), increase in sinusoidal space (S), cell derangement (CD) of hepatocytes (H) and congestion of the central vein (CV). In the saline control group (Figure 2A) and plant control group (Figure 2F), the central vein, hepatocytes and sinusoidal space were all normal. In contrast, the CCl4 model group (Figure 2B) showed severe histopathological alterations, including massive hepatocyte necrosis, steatofibrosis characterized by fatty degeneration or accumulation in hepatocytes, increased mononuclear infiltration, increased sinusoidal space forming fibrous bridges between cells due to the formation of liver pseudolobules, and an abnormal, asymmetrical location of central veins. However, in the groups pre-treated with F. lepicarpa + CCl4 (Figure 2C-E), these changes were attenuated in a dose-dependent manner. The inflammation of liver cells and fatty degeneration were significantly reduced, particularly at 400 mg/kg bwt (Figure 2E). The hepatocyte cords showed less disarray, as well as repair of cellular boundaries.
F. lepicarpa's Effect on Liver Proinflammatory Cytokines

An inflammatory response is elicited by CCl4-induced liver injury. This activates Kupffer cells, which mediate the action of cytokines such as interleukin-6 (IL-6), tumour necrosis factor-alpha (TNF-α) and prostaglandin E2 (PGE2). The effect of F. lepicarpa on proinflammatory marker expression for TNF-α, IL-6 and PGE2 was therefore also investigated (Figures 3, 4 and 5, respectively). The immuno-staining of the liver sections of the saline control (Figures 3A, 4A and 5A) and plant control (F. lepicarpa, 400 mg/kg bwt) groups (Figures 3F, 4F and 5F) showed a complete absence of intense brown colouration, indicating that the markers were not present in the cells and thus that no liver cell damage was detected. However, compared to the control group, the CCl4 model group (Figures 3B, 4B and 5B) showed an intense brown colouration, demonstrating that marker-positive cells were present. In contrast, a decrease in the intense brown colouration of these markers was observed in the F. lepicarpa + CCl4 treated groups in a dose-dependent manner (Figures 3C-E, 4C-E and 5C-E).
Discussion

This investigation set forth to identify the phytochemical contents of F. lepicarpa plants, as well as their potential to provide a hepatoprotective effect against CCl4-induced liver injury in Sprague-Dawley rats.
These plants were chosen for their therapeutic characteristics, having been widely utilised in traditional medicine by local communities to treat a range of illnesses, including jaundice [18], ringworm infection [20] and fever [17]. These plant products are also consumed as tonics, vegetables and edible ripe fruits, in addition to being utilised as alternative medicine. Despite their widespread use as a medicinal and food source, there is still a dearth of scientific evidence about these plants in the literature. The drying of fresh F. lepicarpa leaves involved maintaining a temperature of 40 °C to prevent the degradation of the bioactive compounds. This drying method was selected because, in a previous study that dried the leaves under natural conditions, the leaves turned brown and blackish in colour, which in turn affected the total phenolic and total flavonoid assays as well as the antioxidant assay [18]. The selection of the drying method prior to further experimentation is important because it affects the outcome of the downstream research. Drying is regarded as the most crucial step in the post-harvest process due to its importance in limiting enzymatic degradation and microbial growth while preserving the plant's beneficial properties [24]. The drying procedure can alter the chemical composition of a herbal medicinal preparation, lowering its quality [25]. Previous research on the drying of chrysanthemum flower heads shows the advantages of oven drying over sun drying and shade drying in terms of TPC, TFC and antioxidant properties [26]. The method of extraction and the choice of solvent depend on the targeted bioactive compound and the plant type [27,28]. Earlier reports show that flavonoid and phenolic compounds were abundant in methanol extraction, while carotenoids and capsaicinoids were found to be abundant in hexane extraction [29]. This stresses that selectivity is significantly tied to the target compound's solubility in the solvent itself. The methanol extract of F. lepicarpa leaves contained elevated levels of TPC and TFC, as well as the ability to scavenge free radicals as measured by the DPPH assay. Plant phenolics and flavonoids are phytochemical substances, present in both edible and inedible plant components, that have been shown to have a variety of biological functions, including antioxidant and anti-inflammatory properties. Because of their redox properties, phenolic compounds can act as singlet oxygen quenchers, reducing agents and hydrogen donors, which contribute to their scavenging properties [30]. Flavonoids are secondary metabolites with antioxidant activity, the effectiveness of which is determined by the number and location of free OH groups. The DPPH test is a simple, accepted and extensively used method for determining a plant extract's radical scavenging potency [31]. Similarly, previous research has discovered a strong positive connection between antioxidant activity and total flavonoid and total phenolic concentrations in celery [32]. Plant phytochemical compounds have been reported to participate in a variety of biological activities, such as antibacterial, antifungal, antioxidant, anti-cancer and anti-inflammatory activity [33]. Preliminary phytochemical screening revealed that F. lepicarpa contained flavonoids, phenols, saponins, steroids, phytosterols and triterpenoids.
The literature documents that all of the secondary metabolite compounds present have potential health-promoting properties [34]. GC-MS analysis of the methanol extract of F. lepicarpa leaves revealed the presence of a diversity of bioactive compounds. The major compounds, which include 12-oleanen-3-yl acetate (3α) and urs-12-en-24-oic acid, 3-oxo-, methyl ester, (+)-, are presumed to be pentacyclic triterpenoid isomers; these compounds are also known as β-amyrin acetate and α-amyrin acetate, respectively [35]. Ficus racemosa, F. cordata, F. palmata, F. thumbergii, F. sur and F. sycomorus have all been found to contain α-amyrin acetate and β-amyrin acetate [36]. The existence of these compounds in F. lepicarpa indicates that it shares the biochemical profile of the genus. Previous reports show that these compounds possess anti-inflammatory properties, which are used by pharmaceuticals to treat wounds, ulcers, and joint, bone and liver infections [37,38]. The body weight of the rats was recorded to keep track of their health, and the percentage increase in body weight implied that all individuals were healthy. The low percentage increase in body weight in the CCl4 model group and in the groups pre-treated with plant extracts, when compared to the control and plant control groups, implies a decrease in food and water intake, likely the result of CCl4 toxicity. CCl4 has been shown in previous studies to limit nutritional consumption due to maldigestion or malabsorption caused by gastrointestinal disorders [39]. In a dose-dependent manner, F. lepicarpa promotes body weight recovery, with higher doses of plant extract showing improved recovery. This backs up a recent study that found that giving walnut extract to CCl4-treated rats reversed their body weight loss [40]. The liver index did not differ between groups. A previous study found that lipid and collagen build-up raises the liver index; however, acute CCl4 exposure in rats may not increase these to the point of significantly influencing the liver index [41]. The toxicity of CCl4 was validated by measuring ALT and AST levels in the blood: the levels of these enzymes increased dramatically after the final CCl4 treatment. Bioactivation by the cytochrome P-450 system, as a result of CCl4 toxicity, forms a toxic, reactive trichloromethyl radical (•CCl3). This radical further attacks membrane lipids to initiate a chain reaction, resulting in the peroxidation of membrane lipids and leading to hepatocellular damage. The production of free radicals (trichloromethyl and peroxytrichloromethyl) from CCl4 results in the leakage of cytoplasmic ALT and AST enzymes into the circulatory system, and the high enzymatic levels indicate that the liver structure has been severely damaged [42,43]. With the administration of various doses of F. lepicarpa extract, the levels of these enzyme markers were restored in a dose-related manner. The findings of this study agree with previous research in which plant extract restored serum markers in the blood after induction with CCl4 [44]. Stabilisation of the plasma membrane and hepatocyte repair could explain the recovery from this damage. The findings suggest that F. lepicarpa methanol extracts may protect the liver from CCl4-induced damage. Antioxidant enzymes have been proven in previous research to be the first line of defence against reactive oxygen species (ROS) and other free radicals [44][45][46][47][48].
Previous studies showed that antioxidants such as taurine, N-acetylcysteine (NAC) and α-tocopherol promote the recovery of hepatocytes from toxicity induced by medicines such as the triazole rizatriptan, thioridazine and citalopram [49][50][51]. The antioxidant activity of F. lepicarpa against CCl4-induced ROS in rats was investigated in this work. The levels of GSH, LPO, GPx, GR, GST and QR in hepatic tissues were measured, reflecting the ROS production generated by CCl4 in the rats' livers. ROS generation is also associated with a decrease in the mitochondrial membrane potential, followed by DNA fragmentation and an increase in the expression of pro-apoptotic and inflammatory markers. The administration of F. lepicarpa extracts to rats produced recovery in the levels of GSH, LPO, GPx, GR, GST and QR in CCl4-induced toxicity, which, in general, reflects the recovery of mitochondrial membrane permeability and thus a decrease in ROS formation. Reduced glutathione (GSH) is a low-molecular-weight, non-enzymatic antioxidant normally present in all cell types. It serves as a first line of defence against free radicals and functions as a co-substrate for other antioxidants. Due to CCl4 toxicity, free radicals are produced, causing oxidative stress, a decline in mitochondrial membrane potential, inflammatory cell death and tissue damage as a result of membrane lipid disruption. GSH levels are often reduced upon elevation of oxidative stress; the decrease in GSH levels could be related to increased consumption by the cell to scavenge the free radicals produced by CCl4 [52]. In the current investigation, liver GSH levels in CCl4-administered rats were considerably lower than in the control groups, which is consistent with earlier research [45,46]. The toxicity of CCl4 is reduced by pretreatment with F. lepicarpa, and the mechanism of F. lepicarpa's liver protection against CCl4 toxicity may include the restoration of GSH levels. The possible mechanism of the hepatoprotective role of F. lepicarpa might be the presence of bioactive compounds which neutralize reactive oxidants directly, enhance the endogenous antioxidant defence system, and increase the steady-state GSH level and/or the synthesis rate of GSH, enhancing protection against oxidative stress and restoring the mitochondrial membrane potential affected by CCl4's toxic effect [52]. Lipid peroxidation is the cause of the cell membrane damage and the genesis of the liver injury produced by the free radical derivatives of CCl4. Free radicals primarily attack the phospholipid bilayers of cellular and subcellular membranes, and increased levels of MDA (the end product of lipid peroxidation) indicate increased lipid peroxidation [53]. In this study, the MDA level increased considerably in the CCl4-treated model group. When animals were pretreated with F. lepicarpa extract, the amount of MDA in their livers was reduced moderately in a dose-related manner. This finding is in good agreement with previous studies, in which CCl4 treatment elevated the MDA level [44][45][46][47][48]. The reduction in MDA levels indicated that lipid peroxidation was being inhibited, alongside an increase in antioxidative defence mechanisms to prevent free radical synthesis and propagation, which would otherwise cause oxidative damage [54]. In addition to the non-enzymatic antioxidants (GSH and the LPO marker), the enzymatic antioxidants (GPx and GR) and phase II metabolizing enzymes (GST and QR) also play an important role in the defence mechanism against free radicals.
These antioxidant enzymes play a crucial role in the detoxification process and provide protection during the healing process by scavenging free radicals during oxidative stress; however, these antioxidant enzymes are themselves susceptible to oxidation [55]. The activities of these enzymes were significantly reduced in the CCl4 model treatment group when compared to the normal and plant control groups. The findings in this research are in accordance with previously published research [44][45][46][47][48]. The addition of methanol extract of F. lepicarpa leaves to the rats' intake increased enzyme activity, demonstrating the plant's antioxidant and hepatoprotective properties. Histopathological investigations are required to support biochemical findings [56]. Histopathological observation in the CCl4 treatment model indicated that CCl4 induced fibrosis, cirrhosis and hepatocarcinoma. F. lepicarpa reduced the severe liver injury caused by CCl4 in a dose-dependent manner. The histological manifestations confirmed the biochemical findings and clearly suggested that F. lepicarpa methanol extract has a significant hepatoprotective effect against CCl4-induced oxidative stress. The histological manifestations in this study are consistent with the findings of other authors [44][45][46][47][48].

Chemicals

Carbon tetrachloride (CCl4) and the other chemicals used in this study were obtained from Sigma Aldrich (St. Louis, MO, USA). Solvents of analytical grade or GC grade were from Fisher Scientific (Hampton, NH, USA). Alcohol, acid alcohol, blue buffer, eosin, haematoxylin, xylene and DPX mounting medium for histological assessment were purchased from Leica Biosystems (Wetzlar, Germany). Dako (Glostrup, Denmark) provided the immunohistochemistry antibodies and reagents.

Sample Collection and Preparation

Fresh Ficus lepicarpa leaves (voucher number: SVS 001) were collected from Kg. Morion, Tandek, Kota Marudu, Sabah, Malaysia, in November 2017. The Ficus species was authenticated by a botanist from the Institute for Tropical Biology and Conservation (IBTP), Universiti Malaysia Sabah. The oven-dried leaves (40 °C) were ground using a blender and macerated in methanol at a ratio of 1:10 (w/v) for 72 h at room temperature. The extract was filtered through Whatman filter paper No. 1, and methanol residues were removed from the extract using a vacuum rotary evaporator. The samples were stored at −80 °C for 24 h before being lyophilized with a freeze drier. The freeze-dried samples were then kept in the freezer for subsequent examination [59].

Total Phenolic Content (TPC)

The total phenolic content of the F. lepicarpa methanol leaf extract was estimated spectrophotometrically at 725 nm according to the Folin-Ciocalteu method, with slight modification [47,60]. The blank had the same constituents except that the extract was replaced by distilled water. All analyses were repeated three times, and the mean absorbance was obtained. Gallic acid dilutions (0.1-0.5 mg/mL) were used as the standard for preparing the calibration curve. The total phenolic content was expressed as mg gallic acid equivalents (GAE) per gram of extract.

Total Flavonoid Content (TFC)

The flavonoid content of F. lepicarpa extract was estimated using the aluminium chloride (AlCl3) colorimetric method at 510 nm, with slight modification [48,61]. In this assay, catechin dilutions (0.01-0.1 mg/mL) were used as the standard to make the calibration curve.
The total flavonoids in the extracts were determined in triplicate, and the results were averaged. The total flavonoid content of the extract was expressed in mg of catechin equivalents (CAE) per gram.

Radical Scavenging Assay (DPPH)

The DPPH assay was used to estimate the free radical scavenging activity of the F. lepicarpa leaf methanol extract [62]. DPPH reduction was measured colorimetrically at 517 nm against a blank after 1 h of incubation with DPPH in the dark. The experiments were carried out in triplicate. The percentage inhibition was calculated using the following formula:

Percentage inhibition (%) = [(A control − A sample) / A control] × 100

Here, A control is the absorbance of the control (solution without extract or standard), and A sample is the absorbance in the presence of the extract or standard at various concentrations. The graph of the percentage of RSA versus extract concentration was used to calculate the concentration giving 50% inhibition (IC50); the values were calculated from the slope of the linear regression.

Gas Chromatography-Mass Spectrometry Analysis (GC-MS)

GC-MS analysis was carried out using an Agilent 7890A gas chromatograph coupled with an Agilent 5975C inert XL EI/CI MSD mass spectrometer, with an ionization energy of 70 eV and a capillary column HP-5MS (30 m × 0.25 mm × 0.25 µm). The injection volume was 1 µL in splitless mode. The injector temperature was set at 250 °C, using pure helium gas as the carrier at a constant flow rate of 1.0 mL/min. Separation of metabolites was performed with a temperature program starting at 100 °C, held for 3 min, then increased from 100 °C to 180 °C at 15 °C/min and from 180 °C to 300 °C at 5 °C/min, and finally held at 300 °C for 10 min. Identification of the chemical compounds in the extract was based on gas chromatography retention time and the concentration of the compounds in the chromatogram, with computer matching of the mass spectra against standards from the National Institute of Standards and Technology (NIST) library. Along with a blank solvent, each analysis was performed in triplicate.

Animal Experiments

Adult Sprague-Dawley male rats weighing 200 to 250 g were obtained from the Biotechnology Research Institute's Animal Breeding House, located at the Animal Biosafety Lab (ABSL) facilities of Universiti Malaysia Sabah. During the study, the animals were kept in a laboratory animal house on dried corn bedding in plastic (polypropylene) cages, with a controlled environment (25 ± 3 °C and 50% humidity) and free access to a continuous food and water supply. The protocols for animal use were in accordance with the guidelines for the care and use of laboratory animals of the National Academy of Sciences [69] and were approved by the Animal Ethics Committee of Universiti Malaysia Sabah (UMS/PPPI1.3.2/800-2/1/17 Jilid 4 [3]). The animals were acclimatized for a week under standard laboratory environmental conditions and randomly assigned to control and experimental groups. Thirty-six adult male rats were randomly assigned to six groups of six rats each and treated as follows:

Group 1: Normal control (no treatment).
Group 2: CCl4 (1.0 mL/kg bwt).
Group 3: CCl4 + F. lepicarpa (100 mg/kg bwt).
Group 4: CCl4 + F. lepicarpa (200 mg/kg bwt).
Group 5: CCl4 + F. lepicarpa (400 mg/kg bwt).
Group 6: F. lepicarpa (400 mg/kg bwt) (plant control alone).

The CCl4 was given orally at a dose of 1.0 mL/kg bwt in corn oil (1:1). A distilled water extract suspension was prepared, and different doses of F.
A suspension of the extract in distilled water was prepared, and different doses of F. lepicarpa extract (100, 200, and 400 mg/kg bwt) were administered orally to the animals via a gastric gavage needle for 14 days, followed by two doses of CCl4 on the 13th and 14th days. The selection of the doses of plant extract for the in vivo experiment and the duration of the treatment period were based on our own preliminary studies. All of these rats were sacrificed within 2 h of the last CCl4 dose. After cervical dislocation, blood was taken from the posterior vena cava before the heart stopped beating. The clotted blood was centrifuged for 30 min at 2000× g, and the serum was kept at −80 °C for serum transaminase (ALT and AST) assays. The extracted liver tissues were perfused with chilled 0.85% w/v NaCl to remove extraneous materials and to obtain a homogenate or post-mitochondrial supernatant (PMS). A small portion of liver tissue was homogenized in cooled phosphate buffer (0.1 M, pH 7.4), which was then frozen at −80 °C for subsequent biochemical analysis and antioxidant enzyme assays. To quantify the activity of quinone oxidoreductase, a part of the homogenate was ultracentrifuged at 105,000× g for 1 h at 4 °C to yield the cytosolic fraction. The residual liver tissues were fixed in 10% neutral buffered formalin for histopathological and immunohistochemistry analysis. The liver index was calculated as the ratio of the liver wet weight (g) to the final body weight (g), multiplied by 100. Serum Transaminases (AST and ALT) The colorimetric method developed by Reitman and Frankel was used to measure serum alanine aminotransferase (ALT) and serum aspartate aminotransferase (AST) [22,70]. It uses a buffered enzyme substrate (L-alanine (ALT) or L-aspartate (AST) and α-ketoglutarate) with dinitrophenylhydrazine (DNPH) as a colouring reagent, which reacts with pyruvate (2 mM standard) to form a brown-coloured hydrazone complex under alkaline conditions. The colour developed was directly proportional to the enzyme activities, and colour intensities were measured at 510 nm. Sodium pyruvate solutions of different concentrations were used to plot a standard graph, from which the unknown serum transaminases were assessed. Reduced Glutathione (GSH) The reduced GSH content was determined by a previously established method [71]. The reaction mixture consisted of liver homogenate (10% w/v), phosphate buffer (0.1 M, pH 7.4), sulfosalicylic acid (4% w/v) and 5,5′-dithiobis-2-nitrobenzoic acid (4 mg/mL) in a total volume of 3.0 mL, in which a yellow colour formed. It was immediately read at 412 nm on a visible spectrophotometer. Results are given as micromoles of reduced glutathione per gram of tissue, calculated with a molar extinction coefficient of 1.36 × 10³ M⁻¹ cm⁻¹. Lipid Peroxidation (TBARS Content) Lipid peroxidation was assessed by measuring the production of thiobarbituric acid reactive substances (TBARS) following a previously described method [72,73]. The reaction mixture consisted of liver homogenate (10% w/v) and trichloroacetic acid (TCA) (10% w/v). The supernatant of the mixture, after centrifugation with thiobarbituric acid (TBA) (0.67% w/v) in a total volume of 2.0 mL, was read at 535 nm. The results were expressed as nanomoles of MDA produced per gram of tissue, computed using a molar extinction coefficient of 1.56 × 10⁵ M⁻¹ cm⁻¹. Glutathione Reductase (GR) The glutathione reductase (GR) enzyme transforms oxidised glutathione (GSSG) to reduced glutathione (GSH) while oxidizing NADPH to NADP⁺.
GR activity was determined based on a previously described method [75,76]. The enzyme activity was expressed as nanomoles of NADPH oxidized per minute per milligram of protein, using a molar extinction coefficient of 6.22 × 10³ M⁻¹ cm⁻¹. Absorbance readings of the reaction mixture were recorded every 30 s for 3 min (seven readings) at 340 nm. The mixture, with a total volume of 2.45 mL, consisted of liver homogenate (10% w/v), GR working solution (phosphate buffer (0.1 M, pH 7.4) and EDTA (0.5 mM)), oxidized glutathione (1 mM) and NADPH (0.1 mM). Glutathione S-Transferase (GST) Glutathione S-transferase (GST) activity in the liver homogenate was assayed through the conjugation of reduced glutathione (GSH) with 1-chloro-2,4-dinitrobenzene (CDNB), an ultraviolet chromogenic substrate that forms a GSH conjugate (CDNB-SG), following a previously described method [77,78]. The changes in absorbance were recorded every 30 s for 3 min at a wavelength of 340 nm and expressed as nanomoles of CDNB conjugate formed per minute per milligram of protein, calculated using a molar extinction coefficient of 9.6 × 10³ M⁻¹ cm⁻¹. The reaction mixture, with a total volume of 3.0 mL, consisted of homogenate (10% w/v), CDNB (1 mM), phosphate buffer (0.1 M, pH 7.4) and reduced glutathione (1 mM). Histopathological Assessment The liver tissue was fixed in 10% neutral buffered formalin. It was then cut into sections approximately 5 mm thick, dehydrated with a series of increasing (70-100%) alcohol concentrations and three changes of xylene as a clearing agent, and infiltrated with three changes of liquid paraffin wax before being embedded in solid paraffin blocks. The tissue blocks were cut into thin ribbon sections (4-5 µm) and stained with haematoxylin and eosin (H&E). A pathologist who was unaware of the assignment of samples to experimental groups examined the liver tissue sections using a high-resolution light microscope with photographic facilities. Immunohistochemical Assessment For immunohistochemistry, the liver tissues fixed in 10% neutral buffered formalin were processed according to the histological tissue-processing method up to embedding in solid paraffin blocks. The paraffin-embedded tissue was cut at 4 µm, mounted on positively charged slides and dried at 58 °C for 60 min. The slides were then deparaffinised and rehydrated through two changes of xylene and a graded series of alcohol (two changes of 100%, one change each of 95%, 70% and 50%) before being rinsed with deionized water and rehydrated with a wash buffer. After microwave antigen retrieval using sodium citrate buffer, the slides were incubated in 3% hydrogen peroxide to block endogenous peroxidase before being incubated with primary antibodies (rabbit polyclonal antibodies) against interleukin-6 (IL-6) (1:500), prostaglandin E2 (PGE2) (1:500) and tumour necrosis factor-alpha (TNF-α) (1:500). Signal enhancement was performed using horseradish peroxidase (HRP), and antibody binding was detected with the 3,3′-diaminobenzidine (DAB) chromogenic substrate (brown colouration) based on the desired level of colour intensity. The slides were then counterstained with Harris haematoxylin, dehydrated, cleared in xylene and mounted with DPX. Immunoreactivity was assessed by a pathologist who was not informed of the samples' experimental groups. The appearance of an intense brown colouration on liver sections demonstrated the presence of these markers in the cells. Statistical Analysis The findings of this investigation were expressed as mean ± standard error (mean ± SE).
To determine the significance of the differences between the control and experimental groups, statistical analysis (SPSS statistical analysis software) was performed using one-way analysis of variance (ANOVA) for repeated measurements, with the significance level set at p < 0.05. Conclusions In conclusion, the normalization of hepatic GSH levels and lipid peroxidation (MDA), along with the normalized activities of antioxidant enzymes and serum markers (ALT, AST), suggests that F. lepicarpa extract has protective effects against CCl4-induced hepatotoxicity by reducing oxidative stress. A reduction in the histopathological alterations and proinflammatory cytokine expression in the liver further supports the biochemical findings. We conclude that F. lepicarpa extract could be used as a therapeutic agent to protect the liver from oxidative damage. Prior to considering F. lepicarpa as a therapeutic agent, investigations of its mechanism of action and pharmacokinetics, as well as of the bioavailability of its bioactive constituents, are essential.
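As an illustration of the calculation described for the DPPH assay above, the following minimal Python sketch computes the percentage inhibition and the IC50 from the slope and intercept of the linear regression of %RSA versus concentration. The concentrations and absorbances are hypothetical placeholder values, not data from this study.

```python
import numpy as np

# Hypothetical DPPH readings: extract concentrations (mg/mL) and mean
# absorbances at 517 nm; a_control is the DPPH solution without extract.
conc = np.array([0.1, 0.2, 0.3, 0.4, 0.5])           # mg/mL (illustrative)
a_sample = np.array([0.62, 0.51, 0.40, 0.30, 0.19])  # illustrative triplicate means
a_control = 0.80

# Percentage inhibition, as defined in the text:
# %inhibition = (A_control - A_sample) / A_control * 100
inhibition = (a_control - a_sample) / a_control * 100.0

# IC50 from the linear regression of %RSA vs concentration, i.e. the
# concentration at which inhibition reaches 50%.
slope, intercept = np.polyfit(conc, inhibition, 1)
ic50 = (50.0 - intercept) / slope
print(f"IC50 = {ic50:.3f} mg/mL")
```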
2022-04-21T15:20:26.514Z
2022-04-01T00:00:00.000
{ "year": 2022, "sha1": "f140333fc4fda58e089df62a863481a190f4d91b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/27/8/2593/pdf?version=1650276492", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9fd98217751ba08695d824ffcd9c8a8ae33b541c", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
235694705
pes2o/s2orc
v3-fos-license
Small scale effects in the observable power spectrum at large angular scales In this paper we show how effects from small scales can enter the angular-redshift power spectrum $C_\ell(z,z')$. In particular, we show that spectroscopic surveys with high redshift resolution are already affected on large angular scales, i.e. at low multipoles, by features from small scales. When considering the angular power spectrum with spectroscopic redshift resolution, it is therefore important to account for non-linearities relevant on small scales, even at low multipoles. This may also motivate the use of the correlation function in relatively wide redshift bins, which is not affected by non-linearities on large scales, instead of the angular power spectrum. The extent to which small-scale effects become visible on large scales, which is more relevant for bin auto-correlations than for cross-correlations, is quantified in detail. Introduction In cosmology, positions of galaxies are truly observed as redshifts and angles. If we want to determine the correlation properties of galaxies (or classes of galaxies) in a model-independent way, we must study the redshift-dependent angular power spectra, C_ℓ(z, z′), or the angular correlation functions, ξ(θ, z, z′). Assuming statistical isotropy, these 2-point functions fully determine the 2-point statistics of the galaxy distribution and, with the additional assumption of Gaussianity, they determine all statistical quantities. Whenever redshifts and angles are converted into length scales, assumptions about the background cosmology are made. At very low redshift, z ≪ 1, the distance is entirely determined by the Hubble parameter, r(z) ≃ z/H_0, and the model dependence is encoded in the 'cosmological unit of length' given by h⁻¹ Mpc, where h = H_0/(100 km/s/Mpc). However, at redshifts of order unity and larger, the full cosmological model enters the determination of r(z). This fact has prompted a tendency in the field to prefer the directly-observable angular-redshift power spectra and correlation function. We cite Refs. [1-7] as examples. In this paper we study the following question: When analyzing a spectroscopic dataset, in which redshifts are very well known, that is σ_z ≲ 10⁻³, does the precise radial information about the galaxy position affect the power spectrum at low multipoles, ℓ ≲ 100? We shall see that the answer to this question is yes, as already noted in [8]. Here we perform a detailed study of the amplitude of the effect and its origin. We find that small scale effects, including especially changes in the small-scale power due to non-linearities, significantly affect all spectroscopic C_ℓ(z, z)'s including, most interestingly, at low ℓ. This finding is not so surprising and is actually just another way to understand why the Limber approximation [9,10], which selects a single scale k relevant for a given ℓ, totally fails for spectroscopic number count C_ℓ(z, z)'s, see e.g. [11,12]. We study how the contributions of these effects on large scales decay if we either smear out the C_ℓ(z, z)'s over a sufficiently large redshift window or consider cross-correlations, C_ℓ(z, z′), with sufficiently large |z − z′|. Contrary to correlations of galaxy number counts alone, lensing-lensing or lensing-number counts cross-correlations (galaxy-galaxy-lensing) are insensitive to the appearance of small scale effects at low multipoles, due to the broad kernel of the shear and magnification integrals.
This paper is organized as follows: in the next section we study the origin of the imprint of small scale contributions on the C_ℓ(z, z)'s from galaxy number counts at low ℓ, in general. In Section 3 we investigate the effect induced at low ℓ's by a change in shape of the high-k power spectrum, due to non-linearities for example, as a function of redshift as well as window width. Our aim here is an estimate of the amount by which the C_ℓ's are affected, but not a detailed non-linear modelling of them. The latter could be achieved by high resolution N-body simulations and goes beyond the scope of this work. In Section 4 we investigate cross-correlations and in Section 5 we end with our conclusions. 2 Small scale contributions to low-ℓ spectroscopic C_ℓ(z, z)'s: Generics If we consider correlations of galaxies at fixed redshift with small redshift uncertainty σ_z, their comoving radial separation is smaller than δr(z, σ_z) = σ_z/H(z), corresponding to a radial wave number k_∥ ≳ 2πH(z)/σ_z. Typical spectroscopic surveys have redshift resolution of σ_z ∼ 10⁻³ or better. As an example, δr(1, 10⁻³) = 1.7 h⁻¹ Mpc, which is well within the non-linear regime at z = 1. To find a quantitative estimate, we consider the flat sky approximation for C_ℓ(z, z′), which is excellent for very close redshifts [12]. In this approximation, z̄ = (z + z′)/2 is the mean redshift, k = √(k_∥² + ℓ²/r²(z̄)), and P_g = b²(z)P_m denotes the galaxy power spectrum; P_m is the matter power spectrum and b(z) is the galaxy bias (we neglect non-linear bias). Integrating this over z and z′, with a tophat window of width σ_z centered at z̄, we obtain the windowed spectrum, Eq. (2.3). Strictly speaking, this expression is for the density term only. In order to include redshift space distortions (RSD), we have to replace the power spectrum by P_{D+RSD}(k, µ, z) = (b(z) + f(z)µ²)² P_m(k, z), (2.4) where µ = k_∥/k is the direction cosine of k with the forward direction and f(z) is the growth rate, see e.g. [13]. This is the power spectrum from linear perturbation theory which we use to determine the so-called 'standard terms', which comprise density and RSD correlations and which are used in the present analysis. We have compared approximation (2.3) (replacing P_g by P_{D+RSD}) with the class code [14] and found that the difference is smaller than 1% for windows of size σ_z < 0.01. For larger windows the error slowly grows and reaches about 4% for σ_z = 0.1 and ℓ = 100. The spherical Bessel function, j₀, acts as a 'low-pass filter', meaning that only modes with k_∥ ≲ 2πH(z)/σ_z =: k_∥(z, σ_z) significantly contribute to the angular power spectrum. The smaller σ_z, the higher the values of k_∥ which contribute. In Fig. 1 we plot k_∥(z, σ_z) as a function of redshift for some values of σ_z. This also explains why the Limber approximation fails for narrow redshift bins. The Limber approximation, which replaces k = √(k_∥² + ℓ²/r(z)²) by (ℓ + 1/2)/r(z), is a reasonable approximation only for values of ℓ with k_∥(z, σ_z) ≪ ℓ/r(z). Inserting k_∥(z, σ_z), we find that this requires ℓ > ℓ_∥(z, σ_z), with ℓ_∥(z, σ_z) = r(z)k_∥(z, σ_z) = 2πH(z)r(z)/σ_z. For z ∼ 1 and σ_z ≈ 10⁻³, this means ℓ_∥(z, σ_z) > 6000. Thus, the Limber approximation is completely out of the question at the values of ℓ that we consider here and, as we shall later see, non-linearities enter the spectrum C_ℓ(z) for all values of ℓ. While the equal-redshift spectra considered here are calculated using the flat sky approximation, our later numerical results at unequal redshift are obtained with the code 'CAMB sources' [2], which allows for a simpler adjustment of the k_max in the integration of the spectra than class.
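To make these scales concrete, the following minimal Python sketch evaluates k_∥(z, σ_z) = 2πH(z)/σ_z and ℓ_∥(z, σ_z) = r(z) k_∥(z, σ_z) in a flat ΛCDM background. The cosmological parameter values are illustrative assumptions, not those used in the paper.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative flat LCDM parameters (assumed, not taken from the paper).
H0, Om, c = 67.3, 0.316, 299792.458  # km/s/Mpc, dimensionless, km/s

def H(z):
    """Hubble rate in km/s/Mpc for flat LCDM."""
    return H0 * np.sqrt(Om * (1 + z)**3 + (1 - Om))

def r(z):
    """Comoving distance in Mpc."""
    return quad(lambda zp: c / H(zp), 0.0, z)[0]

def k_par(z, sigma_z):
    """Low-pass filter scale k_par = 2*pi*H(z)/sigma_z, in 1/Mpc."""
    return 2 * np.pi * H(z) / (c * sigma_z)

def ell_par(z, sigma_z):
    """Multipole below which the Limber approximation fails badly."""
    return r(z) * k_par(z, sigma_z)

z, sig = 1.0, 1e-3
print(f"k_par   = {k_par(z, sig):.2f} / Mpc")   # ~2.5/Mpc at z = 1
print(f"ell_par = {ell_par(z, sig):.0f}")       # well above 6000, as in the text
```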
For better stability in the case of unequal redshifts, we use Gaussian windows with full width at half maximum given by σ_z throughout. Of course, the extent to which the contributions from high k are relevant also depends on the shape of the power spectrum. For example, we expect them to be more relevant for the non-linear power spectrum, which is boosted especially at small scales. In this section we calculate the linear angular power spectrum, as well as the non-linear halofit [15] spectrum as an example of a different spectrum shape, taking cut-offs at different values k_max in the integral over k. In particular, we consider the spectroscopic case of a narrow window width σ_z that modifies the relevant value of k_∥, see Fig. 1. In this way we can investigate how sensitive the resulting low-ℓ portion of the spectrum is to high-k contributions, and whether the presence of extra power at high k in the case of the halofit spectrum makes a significant difference. We are not concerned here with the precise shape of the true non-linear power spectrum, but rather whether a sizable change in power at small scales makes a significant change to the amplitude of the angular power spectrum, even at low ℓ. As an example, in Fig. 2 we plot C_ℓ(z = 1, σ_z = 10⁻³), integrating up to different values of k_max. The power missing for small k_max at low ℓ is as significant as at high ℓ. Hence the value of ℓ is not simply related to the value of k which gives the dominant contribution to the angular power spectrum. Figure 2. As k_max increases, the spectra converge to their accurate values at each ℓ. For the lowest values of k_max, the spectrum at higher ℓ's does not obtain any contributions from the range of k considered, and drops to zero. The difference between the linear spectrum and halofit increases with k_max. The spectra for k_max = 1/Mpc and k_max = 3/Mpc are very close, which indicates that the spectrum is converging at k_max ≈ 3/Mpc. Figure 3. We show C_20(z = 1, σ_z = 10⁻³) (blue) and C_100(z = 1, σ_z = 10⁻³) (orange) from the linear (solid lines) and halofit (dashed lines) power spectra as functions of k_max, the upper boundary of the k-integral. The values of k_∥(z, σ_z) and k_Lim(ℓ, z) are also indicated. From this plot we see that k_max ≳ k_∥(z, σ_z) is required for the halofit spectra to converge. For the lowest k_max there is no difference between the linear power spectrum and halofit, but for k_max ≳ 0.1 the halofit power spectrum boosts all C_ℓ's. However, while for k_max = 1 and k_max = 3 the linear power spectra overlap, the non-linear spectra still differ by about 2%. This shows that the effect of the non-linearities on the shape also affects the angular power spectrum at low ℓ. In Fig. 3 we plot C_20 (blue) and C_100 (orange) for z̄ = 1, σ_z = 10⁻³ as functions of k_max. As vertical lines we also indicate k_∥(z̄, σ_z) (black) and k_Lim(z̄, ℓ) = (ℓ + 1/2)/r(z̄) (dotted, blue and orange respectively). The value k_∥ is roughly where convergence of the spectra is reached. It is evident that non-linearities have a significant effect, indicated by the dashed lines.
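The convergence study of Figs. 2 and 3 can be mimicked with a schematic flat-sky integral. The sketch below assumes a tophat window (giving a j₀² filter), an arbitrary normalization and a toy power spectrum — none of these are taken from the paper; only the dependence on the cutoff k_max is the point.

```python
import numpy as np
from scipy.integrate import quad

# Schematic flat-sky estimate of C_ell(zbar, sigma_z) with a hard cutoff
# k_max. The prefactor, the j0^2 tophat window and the toy spectrum are all
# assumptions for illustration.
rbar = 3400.0               # Mpc, comoving distance at zbar ~ 1 (assumed)
Hbar = 120.6 / 299792.458   # H(zbar) in 1/Mpc (assumed)

def P_toy(k):
    """Toy spectrum: P ~ k at large scales, falling beyond k_eq."""
    keq = 0.015  # 1/Mpc
    return 2.0e4 * (k / keq) / (1.0 + (k / keq)**3.1)

def C_ell(ell, sigma_z, k_max):
    dr = sigma_z / Hbar  # radial width of the bin in Mpc
    def integrand(kpar):
        k = np.hypot(kpar, (ell + 0.5) / rbar)
        # np.sinc(x) = sin(pi x)/(pi x), so this is j0(kpar*dr/2)^2
        return np.sinc(kpar * dr / (2 * np.pi))**2 * P_toy(k)
    return quad(integrand, 0.0, k_max, limit=200)[0] / (np.pi * rbar**2)

# Fraction of the converged spectrum recovered for increasing cutoffs:
for kmax in (0.1, 0.5, 1.0, 3.0):
    print(kmax, C_ell(20, 1e-3, kmax) / C_ell(20, 1e-3, 3.0))
```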
3 The low-ℓ spectroscopic C_ℓ(z, σ_z)'s: Effects of non-linearities In the previous section we have seen that, even at low ℓ, spectroscopic C_ℓ(z, σ_z)'s are affected by the power spectrum at high k's. This implies that they are significantly affected by the non-linearities that exist at high k, even if k_Lim ≪ k_NL. Let us introduce the non-linearity scale, as it is often defined, by [16,17] σ(R_NL(z), z) = 1. Here σ²(R, z) is the usual variance of the mass fluctuation in a sphere of radius R, computed from the matter power spectrum P_m. The amplitude of the matter power spectrum is often parametrized by σ(R = 8 Mpc/h, z = 0) = σ_8. We associate R_NL with the corresponding non-linearity wave-number, k_NL(z). To know the precise change of the angular power spectrum at low ℓ's due to non-linearities, one would need to know both the matter and velocity power spectrum precisely at very small (radial) scales and at very large (transverse) scales. In principle, this requires very large, high resolution simulations, including hydrodynamical effects, which goes beyond the scope of the present paper. Here we aim simply to estimate the amount of the modification through answering the following question: By approximately how much are non-linear effects on small scales felt at low ℓ's in the resulting angular power spectrum, when considering the narrow window widths associated with spectroscopic surveys? In order to take into account the uncertainty of modelling, we consider the effects on the spectra of the following three different non-linear models: halofit [15], HMcode [18] and the TNS model [19]. For a non-linear model of the galaxy power spectrum P_gg, the velocity divergence power spectrum P_θθ and their correlation P_gθ, the standard density and RSD power spectrum is given by P_{D+RSD}(k, µ, z) = P_gg + 2(µ²/H(z)) P_gθ + (µ²/H(z))² P_θθ. Figure 4. For µ = 0 the non-linear models are very similar, gaining roughly the same amount of power for higher values of k. However, for µ = 1, the TNS model differs strikingly, deviating already at larger scales, and losing power with respect to the linear spectrum. In red we plot the value of k_NL for z = 1. For the remainder of this paper, any references to P(k, µ, z) are taken to mean the linear power spectrum with only standard terms included, i.e. P_{D+RSD}(k, µ, z), unless otherwise stated. In Fig. 4 we compare the linear power spectrum (black) with various non-linear models, namely: halofit (blue), HMcode (orange) and the TNS model (green). We consider a fixed redshift (z = 1.0) and two values of µ: µ = 0, for which RSD are absent, is shown as dashed lines, and µ = 1, for which RSD are maximal, is shown as solid lines. The components of the linear power spectrum correspond, as we have stated, to the standard terms which we consider in this paper. Within linear perturbation theory they are given by (2.4). For µ = 0 we are effectively considering only the contribution from density auto-correlations. In this case all three of the non-linear models are very similar, gaining roughly equal amounts in amplitude. However, for µ = 1 there is a significant difference between the TNS model and the two others. While halofit and HMcode agree so well that their spectra overlie, the TNS model has much less power on small scales for µ = 1, even below the linear spectrum. This comes from the more realistic treatment of the 'fingers-of-god' effect in TNS, which significantly reduces the power of the velocity spectrum in the radial direction. This is relatively well modeled in the TNS approach, but neglected in both halofit and HMcode, for which we approximate the density + RSD spectrum by (2.4), simply replacing the linear matter power spectrum P_m by the corresponding non-linear model.
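As a worked illustration of Eq. (2.4) and of how halofit/HMcode enter here, the sketch below builds the standard density+RSD spectrum from a toy linear P_m and a crude stand-in for a non-linear boost. With this treatment the boost is the same at µ = 0 and µ = 1 — precisely the limitation, relative to TNS, discussed above. The bias, growth rate and both spectra are placeholder assumptions.

```python
# Minimal sketch of the linear "standard terms" spectrum of Eq. (2.4),
# P_{D+RSD}(k, mu, z) = (b(z) + f(z) mu^2)^2 P_m(k, z), with the generic
# replacement P_m -> P_m^NL used for halofit/HMcode-like models.

def P_m_lin(k, z):
    keq, D = 0.015, 1.0 / (1.0 + z)  # rough growth factor (assumed)
    return 2.0e4 * D**2 * (k / keq) / (1.0 + (k / keq)**3.1)

def P_m_nl(k, z):
    # Crude stand-in for a halofit-like boost at small scales (assumed shape).
    return P_m_lin(k, z) * (1.0 + (k / 0.5)**1.2 / (1.0 + (k / 10.0)**1.2))

def P_D_RSD(k, mu, z, b=1.5, f=0.86, nonlinear=False):
    Pm = P_m_nl(k, z) if nonlinear else P_m_lin(k, z)
    return (b + f * mu**2)**2 * Pm

# The Kaiser prefactor cancels in the ratio, so the boost is mu-independent:
# this is exactly what neglecting fingers-of-god implies.
for mu in (0.0, 1.0):
    k = 1.0  # 1/Mpc, well inside the non-linear regime at z = 1
    boost = P_D_RSD(k, mu, 1.0, nonlinear=True) / P_D_RSD(k, mu, 1.0)
    print(f"mu = {mu}: non-linear/linear = {boost:.2f}")
```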
A comparison between these and several other non-linear approximations and numerical simulations in [8] has shown that TNS best models redshift space distortions, while halofit (or HMcode) best models the non-linear matter power spectrum. In Fig. 5 we compare the linear C_ℓ spectrum (black) with the non-linear halofit (blue), HMcode (orange) and TNS model (green) spectra for z̄ = 1.0 at fixed spectroscopic bin width σ_z = 10⁻³, integrated to a well-converged value of k_max = 5. The differences in power introduced by the narrow window considered differ depending on the non-linear model chosen, since the effects on the power spectrum P(k, µ, z) at high k are different. However, in all cases, a constant offset remains even at low ℓ, indicating that the differences have been propagated across all scales by the narrow window. This is better visible in Fig. 6, where we show the relative differences, in %. At z = 1, spectroscopic power spectra obtained by halofit and HMcode are about 7% larger than the linear perturbation theory result, while the TNS spectrum is about 10% lower for low ℓ's, down to ℓ = 2. For the halofit and HMcode spectra, at ℓ ≳ 100, where non-linearities are expected to set in, the difference rapidly grows. The relative difference of the TNS spectrum first becomes smaller for ℓ ≳ 200 and overtakes the linear spectrum only at ℓ ≈ 900, the coincidental point where the TNS spectrum crosses the linear spectrum. After this point it grows just as rapidly as the other non-linear spectra. In all cases considered, the effects of non-linearities in P(k, µ, z) (see Fig. 4) at small scales (large k) lead to differences of at least 7% in the C_ℓ spectrum at large scales, by virtue of the narrow window investigated. Even though the amplitude and even the sign of the difference depend on the treatment of non-linearities, we always obtain a constant offset in the 7 to 10% range for ℓ ≲ 100. For the rest of this analysis, we will use the halofit spectrum as an example to investigate the effect that a boosted amplitude at high k has on the angular power spectrum at low ℓ. The reason for this, on the one hand, is the simplicity of the model and its good agreement with the more recent HMcode, but also its wide use in the literature. Furthermore, for halofit, non-linearities set in at a somewhat higher value of k than for the TNS model. Figure 6. We show the relative differences between the linear (C_ℓ^L) and the non-linear (C_ℓ^NL) spectra plotted in Fig. 5, in percent. Most importantly, we see that the difference to the linear spectrum at low ℓ is a constant offset in the range of ∼7-10% at z = 1. Therefore, if the angular power spectrum for a low value of ℓ is well approximated for a given k_max in the halofit model, this is also true for the more realistic TNS model. In Fig. 7 we plot the k_max required to reach a precision of 1% (upper panel) and 5% (lower panel) for C_20 (blue) and C_100 (orange) as a function of σ_z. The non-linearity scale is also indicated (red line). At σ_z = 10⁻³, k_max ≈ 10 k_NL for the halofit spectra (dashed), even at ℓ = 20, for 1% accuracy, and somewhat smaller for 5%. Up to σ_z ≈ 10⁻², spectra are affected by more than 5% by non-linear scales. Since HMcode is very similar to halofit, and for TNS the non-linearities enter at smaller k, this is expected to be a safe value for k_max for these models too.
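The non-linearity scale indicated in Figs. 7 and 8 can be computed as sketched below: solve σ(R_NL, z) = 1 using the tophat variance of a toy spectrum normalized to an assumed σ_8 = 0.81. The growth factor D(z) = 1/(1+z) and the mapping k_NL = 2π/R_NL are also assumptions made purely for illustration.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def shape(k, keq=0.015):
    """Unnormalized toy power spectrum shape."""
    return (k / keq) / (1.0 + (k / keq)**3.1)

def W(x):
    """Fourier transform of a tophat sphere."""
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def sigma_unnorm(R):
    integrand = lambda k: k**2 * shape(k) * W(k * R)**2 / (2.0 * np.pi**2)
    # W^2 cuts off the integrand around k ~ few/R, so a finite upper limit is safe.
    return np.sqrt(quad(integrand, 1e-4, min(200.0, 50.0 / R), limit=400)[0])

# Normalize so that sigma(R = 8 Mpc/h ~ 12 Mpc, z = 0) = 0.81 (assumed sigma_8).
A = 0.81 / sigma_unnorm(12.0)

def sigma(R, z):
    return A * sigma_unnorm(R) / (1.0 + z)   # D(z) ~ 1/(1+z), a rough growth factor

def k_NL(z):
    R_NL = brentq(lambda R: sigma(R, z) - 1.0, 0.05, 11.9)
    return 2.0 * np.pi / R_NL                # assumed convention k_NL = 2*pi/R_NL

print(f"k_NL(z=0) ~ {k_NL(0.0):.2f} / Mpc")
```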
The naive estimate of k_max ∼ k_∥(z, σ_z) overestimates the numerically calculated k_max, especially for smaller windows σ_z, where the figure shows that the contributions from high k_∥'s saturate and higher k contributions are not necessary to reach the level of precision considered. In Fig. 8 we show the 1% accuracy values (upper panel) and 5% accuracy values (lower panel) of k_max for ℓ = 20 (blue) and ℓ = 100 (orange) for the halofit (dashed) and linear (solid) power spectra at fixed σ_z = 10⁻³, typical of a spectroscopic survey, as functions of redshift. We also indicate k_NL(z) (red). The required k_max(z) is larger than k_NL(z) throughout. Non-linearities still affect the spectra up to z = 5 by more than 5%. Note also that k_max depends only weakly on redshift for the linear power spectrum. Our naive estimate k_max ∼ k_∥(z, σ_z) = 2πH(z)/σ_z ≈ 2.5 (H(z)/H(1))/Mpc (which again overestimates the convergence scale in all cases) would be rising with redshift. The near constancy of k_max is caused by the fact that the matter power spectrum decreases for k > k_eq ∼ 0.01/Mpc. The rise in k_∥(z, σ_z) and the decrease in the matter power spectrum seem to nearly balance each other in the linear power spectrum. However, the redshift dependence of k_max becomes quite pronounced for the non-linear (halofit) spectrum. It is therefore purely a consequence of the shape of the power spectrum, which decreases much less steeply in the non-linear case. The value of k_max(z) for the halofit power spectrum first rises until about z = 2 (a result of the competing effects, with increasing redshift, of weakening non-linearities and the decreasing scale of modes probed by a fixed window width), and then decreases towards higher redshifts, slowly approaching the value for the linear power spectrum, since non-linearities become weaker at high redshift.
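The k_max curves of Figs. 7 and 8 amount to a simple root-finding problem. A minimal sketch, assuming access to some function C(ℓ, σ_z, k_max) — such as the toy flat-sky integral sketched earlier — whose relative error decreases monotonically with the cutoff:

```python
# Sketch of the procedure behind Figs. 7 and 8: find the smallest cutoff k_max
# for which the angular spectrum is within a target fraction (1% or 5%) of its
# converged value. C is any callable C(ell, sigma_z, k_max); the bisection
# assumes the relative error decreases monotonically with the cutoff.

def required_kmax(C, ell, sigma_z, target=0.01, k_hi=10.0, steps=50):
    ref = C(ell, sigma_z, k_hi)          # converged reference value
    lo, hi = 1e-3, k_hi
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        err = abs(C(ell, sigma_z, mid) / ref - 1.0)
        if err < target:
            hi = mid                     # accurate enough: try a smaller cutoff
        else:
            lo = mid                     # not converged: need a larger cutoff
    return hi

# e.g. required_kmax(C_ell, 20, 1e-3) with the toy C_ell sketched above.
```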
4 Cross-correlations, C_ℓ(z_1, z_2) for z_1 ≠ z_2 In this section we consider cross-correlations of the linear C_ℓ(z_1, z_2), with z_1 ≠ z_2. It is well known that for widely-separated redshifts, cross-correlations are dominated by the lensing-density (at low redshifts) or the lensing-lensing (at high redshift) terms [20]. Furthermore, these terms are well approximated with the Limber approximation, to which only the spectrum at k_Lim = (ℓ + 1/2)/r(z) contributes [12]. Here we study just the standard terms, since only these may be affected by small scale contributions. The relation between the standard terms and the lensing terms depends on both the galaxy bias b(z) and the magnification bias s(z), which are survey dependent, see e.g. [4]. Therefore, we do not discuss their relative amplitude in this general study. We consider only redshift pairs (z_1, z_2) which are not too widely separated, so as to avoid cases where the contribution of the standard terms is completely negligible. In the Limber approximation, which is well known to fail for the standard terms in narrow redshift bins [12,21], these terms simply vanish if the bins are not overlapping. To obtain some intuition, we consider again the flat sky approximation and integrate over tophat windows of width σ_z centered at z_1 and z_2. Setting z̄ = (z_1 + z_2)/2, a short calculation gives Eq. (4.1). In this approximation we have used that H(z) and P(k, z) change slowly with redshift, while the exponential changes rapidly. In addition to the low-pass filter k_∥(z̄, σ_z) = 2πH(z̄)/σ_z, we now have a new scale given by the wave number k_×(z_1, z_2) ≃ 2πH(z̄)/|z_2 − z_1|, after which the exponential has performed about one oscillation. We assume that the two redshift bins are not overlapping, i.e. |z_2 − z_1| > σ_z. As a first guess one might hope that contributions from values of k_∥ > k_× are averaged out by oscillations and can be neglected. The situation is actually quite interesting. To analyze it, let us first assume P(k, z) to be independent of k_∥, a reasonable approximation for large ℓ, where k ≈ ℓ/r(z̄). In this case we can integrate (4.1) exactly and obtain an expression which vanishes entirely for well separated windows, σ_z < |z_1 − z_2|. This corresponds to our findings for large ℓ ≳ 100, see Fig. 9, where a window size σ_z = 10⁻³ is chosen. For smaller ℓ the situation is more complicated and the result depends entirely on the behavior of P(k, z). Our numerical examples show that, depending on the redshift pair considered, the value of k_max required to achieve an accuracy of 10% for the cross-spectrum of the standard terms can be much smaller than k_× or up to more than 8 times higher. The contributions from the standard terms in the cross-correlations are very small and tend to zero for ℓ ≳ 100, see Fig. 9. Thus, measuring these terms with observations will be quite challenging. For this reason, we only investigate the k_max needed for a 10% accuracy in this case. Also, as the signal is very close to zero at ℓ = 100, we concentrate on the case ℓ = 20, which is also more relevant here, as it is typically expected to converge for smaller values of k_max. The k_max needed to achieve 10% accuracy in the linear cross-correlation spectrum is about 8 k_× for (z_1, z_2) = (2, 2.3) and ℓ = 20; however, it remains smaller than k_NL. Thus we deduce that, even though a narrower window function still increases the k_max required for good convergence, it is not very significant, and non-linearities are much less important at low ℓ in the case of cross-correlations, see Fig. 10. For (z_1, z_2) = (1, 1.1), k_max is much smaller than k_×, actually roughly given by k_Lim, which would be in agreement with our naive expectation. It seems then that the scale k_× does not characterise the scale of the important contributions well, if at all. While the reason for this is not entirely clear to the authors, one point is simply that the cross-correlation signal of density and redshift space distortion for (z_1, z_2) = (2, 2.3) is about 10 times smaller than the one for (z_1, z_2) = (1, 1.1) at ℓ = 20, making an accuracy of 10% even more difficult to attain. We also found that, when considering higher redshifts, e.g. z̄ ∼ 3, it is possible for k_max to decrease again to less than k_×, and, when considering a smaller separation (z_1, z_2) = (2, 2.2), where the standard terms are larger, we find a very small k_max, once more in agreement with our naive expectation. However, the accuracy of the standard terms is in any case not so relevant for cross-correlation spectra, something which should be borne in mind when interpreting the values of k_max in this section. First of all, as already mentioned, the lensing signal cannot be neglected in these spectra, and including it significantly enhances the total signal, increasing the accuracy of the Limber approximation in calculating the total signal, which requires a lower k_max.
Figure 11. We show the fraction of the full angular power spectrum made up by the density and redshift space distortion terms, in the case of an SKAII-like spectroscopic galaxy survey (black). Since the total signal vanishes at ℓ ∼ 100, we also show C^{D,RSD}/(|C^{D,RSD}| + |C^{L×D+LL}|) (blue). In Fig. 11 we show, as an example, the ratio of the standard terms, C^{D,RSD}, to the total spectrum including also the lensing terms, using values for b(z) and s(z) for SKAII as given in [22], Appendix A4. We use the same redshift pairs as for the previous figures (black lines). For (z_1, z_2) = (1, 1.1), at low ℓ, the ratio is slightly larger than 1, since the lensing contribution, dominated by the density-lensing cross-correlation, is negative. The divergence at ℓ ≈ 100 is the consequence of a zero-crossing of the denominator, the total spectrum. To remove it we also plot C^{D,RSD}/(|C^{D,RSD}| + |C^{L×D+LL}|) (blue lines). At ℓ > 120, the total signal is positive, the lensing term now dominates and the standard terms contribute less than about 20%. For (z_1, z_2) = (2, 2.3) the lensing term is always positive and dominates. The standard terms contribute only about 13% already at ℓ ≈ 20, so that a 10% error in the standard terms leads to a 1.3% error in the total result. Assuming a numerical accuracy of about 1% for these cross-correlations, it is not clear whether the high value of k_max found in this case should not be (at least partially) attributed to numerical error in the CAMB code with which the spectra have been computed. For (z_1, z_2) = (2, 2.2) the standard terms still contribute nearly 50% at ℓ = 20 and the required k_max is again very small, while for (z_1, z_2) = (2, 2.5) they are below 5%, while the required k_max is even larger than that for (z_1, z_2) = (2, 2.3). But as mentioned above, these percentages depend on b(z) and s(z). The second reason why the accuracy of the standard terms is not of utmost importance is that cross-correlations cannot be measured as accurately as auto-correlations, due to cosmic variance. The cosmic variance of cross-correlations is dominated by the much larger auto-correlations. For example, at z ≈ 1, ∆z ≥ 0.1 and ℓ ≤ 50, auto-correlations are of the order of 10⁻⁴ while cross-correlations are about 10⁻⁷. Hence, for a given redshift combination and ℓ ∼ 20, we expect a signal-to-noise ratio from the standard term alone of less than 0.1. To overcome this large cosmic variance, one will have to consider significant binning in ℓ- and z-space. 5 Conclusion In this paper we have studied the angular power spectra for galaxy number counts, C_ℓ(z, z′). In particular, we have investigated their dependence on the width of the considered redshift window, σ_z. We have found that in slim windows, relevant for spectroscopic surveys, i.e. σ_z ≲ 10⁻³, the spectra are strongly affected by high values of k, even at very low multipoles. This means that for spectroscopic redshift bins, non-linearities affect the angular power spectra all the way down to ℓ = 2. More precisely, at low ℓ they produce a constant offset in the 10% range. This is especially true in the case of equal redshifts, z = z′, and much less so for cross-correlations of different redshifts that are non-overlapping, but still close enough so that the standard density+RSD terms in the number counts are nevertheless considerable. Even though the exact effect of non-linearities is not fully established by this work, which does not perform N-body simulations, differences of up to 17% are observed, depending on the non-linear model (e.g. between HMcode and TNS).
Spectroscopic surveys are very important to measure the growth rate of perturbations, one of the most interesting variables for discriminating between different dark energy models [22-24]. Our finding is an additional motivation to use, instead of the angular power spectrum, the angular correlation function for larger redshift bins, and the angular power spectrum only for cross-correlations of these bins. The angular correlation function is not affected by non-linearities for sufficiently large separations in the analysis of spectroscopic surveys. This has been advocated already in the past [25,26]. In Ref. [26] a public code for the fully-relativistic correlation function is presented, and a significant speedup of this code has been achieved recently [27]. Another argument for why the angular power spectra are not suited to spectroscopic redshift resolution is the fact that even in very big surveys we would at best have a few thousand galaxies per redshift bin with σ_z ≈ 10⁻³, which leads to very substantial shot noise. For cross-correlations, the value of k_max needed to achieve a precision of 10% for the density and RSD contribution to the C_ℓ(z, z′) is not so high, due to the following reasons: 1) The cross-correlation signal has an additional oscillating function, with a phase proportional to the redshift difference, in its integral, which reduces the contributions from high k_∥. 2) The lensing terms of cross-correlations, which are well approximated by the Limber approximation, are much more significant than their contribution to auto-correlations, and so the k_max needed for a 10% precision of the total, measured power spectra is significantly smaller. 3) Finally, the cross-correlation signal is much smaller than the auto-correlation signal, while its cosmic variance is similar. Therefore, we expect to achieve significantly lower precision in the measurements of cross-correlations. For these reasons, we have determined the value of k_max required to achieve 10% precision for the contribution of the standard terms to cross-correlations. At large redshift differences, cross-correlations are dominated by the lensing term and one can safely apply the Limber approximation to compute them, see [12] for a detailed study. We therefore consider our results very important for the auto-correlations, which will be measured with a high signal-to-noise ratio in upcoming spectroscopic surveys like Euclid or SKA, but not so relevant for cross-correlations of non-overlapping redshift bins.
2021-07-02T01:15:40.517Z
2021-07-01T00:00:00.000
{ "year": 2021, "sha1": "b9eace6364ea468698a08707b350361a1776eb07", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2107.00467", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b9eace6364ea468698a08707b350361a1776eb07", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
13545493
pes2o/s2orc
v3-fos-license
Spin Sum Rules and Polarizabilities: Results from Jefferson Lab The nucleon spin structure has been an active, exciting and intriguing subject of interest for the last three decades. Recent experimental data on nucleon spin structure at low to intermediate momentum transfers provide new information in the confinement regime and the transition region from the confinement regime to the asymptotic freedom regime. New insight is gained by exploring moments of spin structure functions and their corresponding sum rules (i.e. the generalized Gerasimov-Drell-Hearn, Burkhardt-Cottingham and Bjorken). The Burkhardt-Cottingham sum rule is verified to good accuracy. The spin structure moments data are compared with Chiral Perturbation Theory calculations at low momentum transfers. It is found that chiral perturbation calculations agree reasonably well with the first moment of the spin structure function $g_1$ at momentum transfers of 0.05 to 0.1 GeV$^2$ but fail to reproduce the neutron data in the case of the generalized polarizability $\delta_{LT}$ (the $\delta_{LT}$ puzzle). New data have been taken on the neutron ($^3$He), the proton and the deuteron at very low $Q^2$, down to 0.02 GeV$^2$. They will provide benchmark tests of chiral dynamics in the kinematic region where Chiral Perturbation Theory is expected to work. Introduction In the last twenty-five years the study of the spin structure of the nucleon led to a very productive experimental and theoretical activity with exciting results and new challenges. 1 This investigation has included a variety of aspects, such as testing QCD in its perturbative regime via spin sum rules (like the Bjorken sum rule 2 ) and understanding how the spin of the nucleon is built from the intrinsic degrees of freedom of the theory, quarks and gluons. Recently, results from a new generation of experiments performed at Jefferson Lab seeking to probe the theory in its non-perturbative and transition regimes have reached a mature state. The low momentum-transfer results offer insight in a region known for the collective behavior of the nucleon constituents and their interactions. Furthermore, distinct features seen in the nucleon response to the electromagnetic probe, depending on the resolution of the probe, point clearly to different regimes of description, i.e. a scaling regime where quark-gluon correlations are suppressed versus a coherent regime where long-range interactions give rise to the static properties of the nucleon. In this talk we describe an investigation 4-9 of the spin structure of the nucleon through the measurement of the helicity-dependent photoabsorption cross sections or asymmetries using virtual photons across a wide resolution spectrum. These observables are used to extract the spin structure functions g_1 and g_2 and evaluate their moments. These moments are powerful tools to test QCD sum rules and Chiral Perturbation Theory calculations. Sum rules and Moments Sum rules involving the spin structure of the nucleon offer an important opportunity to study QCD. In recent years the Bjorken sum rule at large Q² (4-momentum transfer squared) and the Gerasimov, Drell and Hearn (GDH) sum rule 10 at Q² = 0 have attracted large experimental and theoretical efforts 3 that have provided us with rich information. Other sum rules, such as the generalized GDH sum rule 11 or the polarizability sum rules, 12 relate the moments of the spin structure functions to real or virtual Compton amplitudes, which can be calculated theoretically.
These sum rules are based on "unsubtracted" dispersion relations and the optical theorem. Considering the forward spin-flip doubly-virtual Compton scattering (VVCS) amplitude g_TT and assuming it has an appropriate convergence behavior at high energy, an unsubtracted dispersion relation leads to an equation for g_TT (Eq. (1)), 9,12 where g_TT^pole is the nucleon pole (elastic) contribution, P denotes the principal value integral and K is the virtual photon flux factor. The lower limit of the integration, ν₀, is the pion-production threshold on the nucleon. A low-energy expansion gives Eq. (2). Combining Eqs. (1) and (2), the O(ν) term yields a sum rule for the generalized GDH integral I(Q²). 3,11 The low-energy theorem relates I(0) to the anomalous magnetic moment of the nucleon, κ, and Eq. (3) becomes the original GDH sum rule: 10 ∫_{ν₀}^∞ [σ_{1/2}(ν) − σ_{3/2}(ν)] dν/ν = −2π²ακ²/M², where 2σ_TT ≡ σ_{1/2} − σ_{3/2}. The O(ν³) term yields a sum rule for the generalized forward spin polarizability γ₀(Q²). 12 Considering the longitudinal-transverse interference amplitude g_LT, the O(ν²) term leads to the generalized longitudinal-transverse polarizability δ_LT(Q²). 12 Alternatively, we can consider the covariant spin-dependent VVCS amplitudes S₁ and S₂, which are related to the spin-flip amplitudes g_TT and g_LT. The unsubtracted dispersion relations for S₂ and νS₂ lead to a "superconvergence relation" that is valid for any value of Q², which is the Burkhardt-Cottingham (BC) sum rule. 13 At high Q², the operator product expansion (OPE) 14 for the VVCS amplitude leads to the twist expansion. The leading-twist (twist-2) component can be decomposed into flavor triplet (g_A), octet (a₈) and singlet (∆Σ) axial charges. The difference between the proton and the neutron gives the flavor non-singlet term, which becomes the Bjorken sum rule, Γ₁^p − Γ₁^n → g_A/6, in the Q² → ∞ limit. The leading-twist part provides information on the polarized parton distributions. The higher-twist parts are related to quark-gluon interactions or correlations. Of particular interest is the twist-3 component, d₂, which is related to the second moment of the twist-3 part of g₁ and g₂: d₂(Q²) = 3∫₀¹ x² [g₂(x, Q²) − g₂^WW(x, Q²)] dx, where g₂^WW is the twist-2 part of g₂ as derived by Wandzura and Wilczek. 15 d₂ is related to the color electric and magnetic polarizabilities, which describe the response of the collective color electric and magnetic fields to the spin of the nucleon. 14 Description of the JLab experiments The inclusive experiments described here took place in JLab Halls A 18 and B. 19 The accelerator produces a polarized electron beam of energy up to 6 GeV. A polarized high-pressure (∼12 atm) gaseous ³He target was used as an effective polarized neutron target in the experiments performed in Hall A. The average target polarization, monitored by NMR and EPR techniques, was 0.4 ± 0.014, and its direction could be oriented longitudinal or transverse to the beam direction. The measurement of cross sections in the two orthogonal directions allowed a direct extraction of g₁ and g₂ of ³He, or equivalently σ_TT and σ_LT. The scattered electrons were detected by two High Resolution Spectrometers (HRS) with the associated detector package. The high luminosity of 10³⁶ cm⁻² s⁻¹ allowed for statistically accurate data. The spin structure functions g₁ⁿ and g₂ⁿ are extracted using polarized cross-section differences. Electromagnetic radiative corrections were performed. Nuclear corrections are applied via a PWIA-based model. 20 To form the moments, the integrands (e.g. σ_TT or g₁) were determined from the measured points by interpolation.
To complete the moments for the unmeasured high-energy region, the Bianchi and Thomas parameterization 22 was used for 4 < W² < 1000 GeV², and a Regge-type parameterization was used for W² > 1000 GeV². Polarized solid ¹⁵NH₃ and ¹⁵ND₃ targets using dynamic nuclear polarization were used in Hall B. The CEBAF Large Acceptance Spectrometer (CLAS) in Hall B, which has a large angular (2.5π sr) and momentum acceptance, was used to detect scattered electrons. The spin structure functions were extracted using asymmetry measurements together with the world unpolarized structure function fits. 21 Radiative corrections were applied. Since the GDH sum rule at Q² = 0 predicts a large negative value, a drastic turnaround should happen at Q² lower than 0.1 GeV². A simple model using MAID 3 plus quasielastic contributions indeed shows the expected turnaround. The data at low Q² should be a good testing ground for few-body Chiral Perturbation Theory calculations. The neutron results indicate a smooth variation of I(Q²) to increasingly negative values as Q² varies from 0.9 GeV² towards zero. The data are more negative than the MAID model calculation. 3 Since the calculation only includes contributions to I(Q²) for W ≤ 2 GeV, it should be compared with the open circles. The GDH sum rule prediction, I(0) = −232.8 µb, is indicated in Fig. 1, along with extensions to Q² > 0 using two next-to-leading order χPT calculations, one using the Heavy Baryon approximation (HBχPT) 23 (dotted line) and the other Relativistic Baryon χPT (RBχPT) 24 (dot-dashed line). Shown with a grey band is RBχPT including resonance effects, 24 which have an associated large uncertainty due to the resonance parameters used. The capability of transverse polarization of the Hall A ³He target allows precise measurements of g₂. The integral Γ₂(Q²) was evaluated for the neutron. The MAID estimate agrees with the general trend but is slightly lower than the resonance data. The two bands correspond to the experimental systematic errors and the estimate of the systematic error for the low-x extrapolation. The total results are consistent with the BC sum rule. The SLAC E155x collaboration 17 previously reported a neutron result at high Q² (open square), which is consistent with zero but with a rather large error bar. On the other hand, the SLAC proton result was reported to deviate from the BC sum rule by 3 standard deviations. First moments of g₁ and the Bjorken sum The preliminary results from the Hall B EG1b experiment 8 on Γ₁(Q²) at low to moderate Q² are shown together with published results from Hall A 4 and Hall B EG1a 5,6 in Fig. 2, along with the data from SLAC 17 and HERMES. 16 The new results are in good agreement with the published data. The inner uncertainty indicates the statistical uncertainty, while the outer one is the quadratic sum of the statistical and systematic uncertainties. At Q² = 0, the GDH sum rule predicts the slopes of the moments (dotted lines). The deviation from the slopes at low Q² can be calculated with χPT. We show results of calculations by Ji et al. 23 using HBχPT and by Bernard et al. with and without 24 the inclusion of vector mesons and ∆ degrees of freedom. The calculations are in reasonable agreement with the data at the lowest Q² settings of 0.05-0.1 GeV². At moderate and large Q², data are compared with two model calculations. 25,26 Both models agree well with the data. The leading-twist pQCD evolution is shown by the grey band.
It tracks the data down to surprisingly low Q², which indicates an overall suppression of higher-twist effects. Spin Polarizabilities: γ₀, δ_LT and d₂ for the neutron The generalized spin polarizabilities provide benchmark tests of χPT calculations at low Q². Since the generalized polarizabilities have an extra 1/ν² weighting compared to the first moments (GDH sum or I_LT), these integrals have smaller contributions from the large-ν region and converge much faster, which minimizes the uncertainty due to the unmeasured region at large ν. At low Q², the generalized polarizabilities have been evaluated with next-to-leading order χPT calculations. 24,32 One issue in the χPT calculations is how to properly include the nucleon resonance contributions, especially the ∆ resonance. As was pointed out in Refs. 24,32, while γ₀ is sensitive to resonances, δ_LT is insensitive to the ∆ resonance. Measurements of the generalized spin polarizabilities are an important step in understanding the dynamics of QCD in the chiral perturbation region. The first results for the neutron generalized forward spin polarizabilities γ₀(Q²) and δ_LT(Q²) were obtained at Jefferson Lab Hall A. 4 The results for γ₀(Q²) are shown in the top-left panel of Fig. 3. The statistical uncertainties are smaller than the size of the symbols. The data are compared with a next-to-leading order (O(p⁴)) HBχPT calculation, 32 a next-to-leading order RBχPT calculation and the same calculation explicitly including both the ∆ resonance and vector meson contributions. 24 Predictions from the MAID model are also shown. The disagreement between the data and the HBχPT calculation might indicate the significance of the resonance contributions or a problem with the heavy baryon approximation at this Q². The higher Q² data point is in good agreement with the MAID prediction, but the lowest data point at Q² = 0.1 GeV² is significantly lower. Since δ_LT is insensitive to the ∆ resonance contribution, it was believed that δ_LT should be more suitable than γ₀ to serve as a testing ground for the chiral dynamics of QCD. 24,32 The bottom-left panel of Fig. 3 shows δ_LT compared to χPT calculations and the MAID predictions. While the MAID predictions are in good agreement with the results, it is surprising to see that the data are in significant disagreement with the χPT calculations even at the lowest Q², 0.1 GeV². This disagreement presents a significant challenge to present Chiral Perturbation Theory. New experimental data have been taken at very low Q², down to 0.02 GeV², for the neutron (³He) 27 and the proton and deuteron. 28 Analyses are underway. Preliminary asymmetry results just became available for the neutron. These results will shed light on and provide benchmark tests of the χPT calculations at the kinematics where they are expected to work. Another combination of the second moments, d₂(Q²), provides an efficient way to study the high-Q² behavior of the nucleon spin structure, since it is a matrix element related to the color polarizabilities and can be calculated from Lattice QCD. It also provides a means to study the transition from high to low Q². In Fig. 3, d₂(Q²) is shown. The experimental results are the solid circles. The grey band represents the systematic uncertainty. The world neutron results from SLAC 17 (open square) and from JLab E99-117 34 (solid square) are also shown. The solid line is the MAID calculation containing only the resonance contribution. At low Q² the HBχPT calculation 32 (dashed line) is shown.
The RBχPT calculations with or without the vector meson and ∆ contributions 24 are very close to the HBχPT curve at this scale, and are not shown on the figure for clarity. The Lattice QCD prediction 33 at Q² = 5 GeV² is negative but close to zero. There is a 2σ deviation from the experimental result. We note that all models (not shown at this scale) predict a negative or zero value at large Q². At moderate Q², our data show that d₂ⁿ is positive and decreases with Q². Preliminary results in a Q² range of 1-4 GeV² for the neutron 29 are available now. New experiments are planned with the 6 GeV beam 30 at an average Q² of 3 GeV² and with the future 12 GeV upgraded JLab 31 at constant Q² values of 3, 4 and 5 GeV². They will provide a benchmark test of the lattice QCD calculations. Conclusion A large body of nucleon spin-dependent cross-section and asymmetry data has been collected at low to moderate Q² in the resonance region. These data have been used to evaluate the Q² evolution of moments of the nucleon spin structure functions g₁ and g₂, including the GDH integral, the Bjorken sum, the BC sum and the spin polarizabilities. At Q² close to zero, available next-to-leading order χPT calculations were tested against the data and found to be in reasonable agreement for Q² of 0.05 to 0.1 GeV² for the GDH integral I(Q²), Γ₁(Q²) and the forward spin polarizability γ₀(Q²). Above Q² of 0.1 GeV², a significant difference between the calculations and the data is observed, pointing to the limit of applicability of χPT as Q² becomes larger. Although it was expected that the χPT calculation of δ_LT would offer a faster convergence because of the absence of the ∆ contribution, the experimental data show otherwise. None of the available χPT calculations can reproduce δ_LT at Q² of 0.1 GeV². This discrepancy presents a significant challenge to our theoretical understanding at its present level of approximations. Overall, the trend of the data is well described by phenomenological models. The dramatic Q² evolution of I_GDH from high to low Q² was observed as predicted by these models for both the proton and the neutron. This behavior is mainly determined by the relative strength and sign of the ∆ resonance compared to that of higher-energy resonances and deep inelastic processes. This also shows that the current level of phenomenological understanding of the resonance spin structure using these moments as observables is reasonable. The BC sum rule for both the neutron and ³He is observed to be satisfied within uncertainties, due to a cancellation between the resonance and the elastic contributions. The BC sum rule is expected to be valid at all Q². This test validates the assumptions going into the BC sum rule, which provides confidence in sum rules with similar assumptions. Overall, the recent JLab data have provided valuable information on the transition from the non-perturbative to the perturbative regime of QCD. They form a precise data set for checks of χPT calculations. New results at very low Q² for the neutron, 27 proton and deuteron 28 will be available soon. They will provide benchmark tests of Chiral Perturbation Theory calculations in the kinematic region where they are expected to work. Future precision measurements 30,31 of d₂ⁿ at Q² = 3-5 GeV² will provide a benchmark test of Lattice QCD.
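The real-photon GDH value quoted above can be checked in a few lines of Python. The sum rule form I(0) = −2π²ακ²/M² and the constants below are standard; the small residual difference from the quoted −232.8 µb comes from rounding of the inputs.

```python
# Numerical check of the real-photon GDH prediction quoted in the text,
# I(0) = -2*pi^2*alpha*kappa^2/M^2, evaluated for the neutron.
from math import pi

alpha = 1.0 / 137.035999   # fine-structure constant
kappa_n = -1.9130427       # neutron anomalous magnetic moment
M = 0.93956542             # neutron mass in GeV
GEV2_TO_MUB = 389.379      # (hbar*c)^2 conversion: 1 GeV^-2 = 389.379 microbarn

I0 = -2 * pi**2 * alpha * kappa_n**2 / M**2 * GEV2_TO_MUB
print(f"I(0) = {I0:.1f} microbarn")   # ~ -232.5, consistent with the quoted -232.8
```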
Standard Model predictions for $B \to K\ell\ell$ with form factors from lattice QCD

We calculate, for the first time using unquenched lattice QCD form factors, the Standard Model differential branching fractions $dB/dq^2(B \to K\ell\ell)$ for $\ell = e, \mu, \tau$ and compare with experimental measurements by Belle, BABAR, CDF, and LHCb. We report on $\mathcal{B}(B \to K\ell\ell)$ in $q^2$ bins used by experiment and predict $\mathcal{B}(B \to K\tau\tau) = (1.44 \pm 0.15) \times 10^{-7}$. We also calculate the ratio of branching fractions $R^\mu_e = 1.00023(63)$ and predict $R^\tau_\ell = 1.159(40)$, for $\ell = e, \mu$. Finally, we calculate the "flat term" in the angular distribution of the differential decay rate, $F_H^{e,\mu,\tau}$, in experimentally motivated $q^2$ bins.

INTRODUCTION

The rare semileptonic decay $B \to K\ell^+\ell^-$ is a $b \to s$ flavor-changing neutral current process that only occurs through loop diagrams in the Standard Model, making it a promising probe of new physics. To make predictions for Standard Model observables, or to extract information about potentially new short-distance physics, knowledge of the associated hadronic matrix elements is required. Because hadronic matrix elements quantify nonperturbative physics, the only first-principles method for calculating them is lattice QCD.

Hadronic matrix elements for semileptonic decays are parameterized by form factors. For processes that occur readily in the Standard Model, only the vector and scalar form factors $f_{+,0}$ are phenomenologically relevant. The study of rare decays requires knowledge of the tensor form factor $f_T$ as well. All form factors are potentially important in the presence of new physics.

There is an active effort [1-5] to constrain new physics using experimental results for $B \to K\ell^+\ell^-$, often in combination with other rare $B$ decays. In the past, the needed form factor information for these works has come from light-cone sum rules (cf. Refs. [6-8]), valid at low $q^2$. In a more first-principles approach, Ref. [1] calculates the form factors in lattice QCD at high $q^2$ using the so-called quenched approximation [9], then extrapolates to low $q^2$ using the model-dependent BK parameterization [10]. However, given the number and precision of recent experimental measurements of this decay, and the importance of stringent tests of the Standard Model in such rare processes, Standard Model predictions free of uncontrolled approximations have become crucial. In the lattice approach, for instance, it is imperative to go beyond the uncontrolled quenched approximation.

In this letter we present Standard Model results that are based for the first time on unquenched lattice calculations that take the effects of up, down, and strange sea quarks into account. Furthermore, our results are extrapolated over the full kinematic range of $q^2$ in a model-independent way. We then make detailed comparisons of these new Standard Model predictions with experimental measurements at the B-factories Belle [11] and BABAR [12], by CDF [13], and most recently by LHCb [14,15]. We note that there are preliminary unquenched lattice QCD results for the form factors by Liu et al. [16] and the Fermilab Lattice and MILC collaborations [17].

LATTICE QCD CALCULATION

We begin with an overview of the lattice QCD calculation of the form factors $f_{0,+,T}$. Ref. [18] contains details, provides the information required to reconstruct the form factors, and calculates useful ratios of form factors.
Ensemble averages of two- and three-point correlation functions are performed using a subset of the MILC 2+1 asqtad gauge configurations [19]. We use two lattice spacings, $a \approx 0.12$ fm and $0.09$ fm, to allow extrapolation to the continuum, and simulate at multiple light sea-quark masses to guide a chiral extrapolation to physical light-quark mass. The valence quarks in our simulation are NRQCD $b$ quarks [20,21] and HISQ light and strange quarks [22-24]. Data were generated using local and smeared $b$ quarks, U(1) random-wall sources for the light and strange valence quarks, and four values of momenta to guide the kinematic extrapolation. We generate three-point data for several temporal spacings between the $B$ meson and kaon to improve our ability to extract three-point amplitudes. We extract hadronic matrix elements from simultaneous fits to two- and three-point data using Bayesian fitting techniques [25] and incorporate correlations among data for different matrix elements and at different momenta. Effective vector and tensor lattice currents are matched to the continuum using one-loop, massless-HISQ lattice perturbation theory [26].

We extrapolate to physical light-quark mass and zero lattice spacing using fit ansätze based on partially quenched staggered chiral perturbation theory [27]. The extrapolations include NLO chiral logs, NLO and NNLO chiral analytic terms to accommodate effects of omitted higher-order chiral logs, and finite-volume effects. We neglect the $O(a^2)$ taste-breaking discretization effects in [27], but accommodate generic discretization effects through $O(a^4)$ in the extrapolation, including light- and heavy-quark mass-dependent discretization effects. Using the physical extrapolated results we generate synthetic data for each form factor, restricted to the region of $q^2$ for which simulation data exist. We extrapolate these data over the full kinematic range of $q^2$ using the model-independent $z$ expansion [28,29] with the Bourrely, Caprini, and Lellouch (BCL) parameterization [30].

The chiral/continuum extrapolation errors for $f_+$ are shown in the top plot of Fig. 1 in the region of $q^2$ for which simulation data exist. Following the method outlined in [31], the error is separated into components by grouping related fit parameters. The chiral extrapolation error ("chiral") contains errors in $f_+$ due to extrapolating to physical light-quark mass, strange-quark mass interpolations to correct slight mistunings, small contributions from mass differences due to the use of a mixed action (asqtad sea and HISQ valence quarks), and finite-volume effects. Discretization errors ("disc.") include light- and heavy-quark mass-dependent, and mass-independent, discretization errors. Statistical errors ("stat.") represent the errors associated with the form factors obtained from the correlation function fits, i.e. the data for the chiral/continuum extrapolation fits. Errors due to input parameters are labeled "inputs". Components of the kinematic extrapolation error are plotted in the bottom panel of Fig. 1, where the region of $q^2$ for which simulation data exist is indicated on the plot. The "stat." error is associated with the synthetic data generated by the chiral/continuum extrapolation, the "z exp." error is the sum in quadrature of errors from coefficients of the $z$ expansion, and the "inputs" error is from uncertainty in input parameters. In the region of simulated $q^2$ the dominant source of error is from the synthetic data.
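For orientation, the $z$ expansion just referenced maps $q^2$ onto a small parameter $z$; in the standard BCL form (written here from the general literature rather than quoted from this text, with $t_\pm = (M_B \pm M_K)^2$, a free parameter $t_0$, and $M_{\rm pole}$ the relevant sub-threshold $B_s$ resonance mass):

$$z(q^2, t_0) = \frac{\sqrt{t_+ - q^2} - \sqrt{t_+ - t_0}}{\sqrt{t_+ - q^2} + \sqrt{t_+ - t_0}},\qquad f(q^2) = \frac{1}{1 - q^2/M_{\rm pole}^2}\,\sum_{n=0}^{N-1} b_n\!\left[z^n - (-1)^{n-N}\,\frac{n}{N}\,z^N\right].$$

Because $|z| \ll 1$ across the semileptonic region, a handful of coefficients $b_n$ suffices, which is what makes this kinematic extrapolation model-independent in practice.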
At low $q^2$ the error is roughly split between the synthetic data and the kinematic extrapolation. Errors associated with input parameters are negligible. A similar analysis of $f_{0,T}$ in Ref. [18] reveals similar behavior. In addition to fit errors, systematic errors from matching, electromagnetic and isospin-breaking effects, and the omission of charm sea quarks contribute a combined 4% error (dominated by matching). The form factors, including all sources of error, are shown in Fig. 2 with shaded bands indicating the region of simulation data.

STANDARD MODEL OBSERVABLES

Using form factors determined for the first time from unquenched lattice QCD, we calculate several Standard Model observables that either allow comparison with experiment or make predictions. Our form factor results are, within errors, equivalent for $B^0 \to K^0\ell^+\ell^-$ ($\bar B^0 \to \bar K^0\ell^+\ell^-$) and $B^\pm \to K^\pm\ell^+\ell^-$. The observables we calculate from the form factors introduce additional dependence on $M_B$, $M_K$, and $\tau_B$. In what follows we calculate isospin-averaged values for each observable. Values for most input parameters are taken from the PDG [32]. We use $1/\alpha_{\rm EW} = 128.957(20)$ [33], $|V_{tb}V^*_{ts}| = 0.0405(8)$ [34], and Wilson coefficients from [35] with 2% errors [36]. Input parameter errors are propagated to the errors reported for observables [37].

Following Ref. [1] and restricting ourselves to the Standard Model, the differential decay rate is written in terms of $a_\ell$ and $c_\ell$, defined in [18], which are functions of form factors, Wilson coefficients, and other input parameters (see the sketch below). We convert decay rates into branching fractions using the $B$ meson's mean lifetime, $\mathcal{B} = \Gamma\,\tau_B$. The resulting differential branching fractions are shown for decay into a generic light dilepton final state in Fig. 3a and a ditau final state in Fig. 3c. Differential branching fractions for dielectron and dimuon final states are nearly identical, and when a generic light dilepton final state is referenced, values are obtained using the average differential branching fraction. Figs. 3b and 3d show error contributions from form factors, input parameters, and Wilson coefficients, denoted $C_i$. Uncertainty in the form factors dominates. Form factor errors are better controlled in the region of simulated $q^2$. As a result, differential branching fractions for $B \to K\tau^+\tau^-$ and for light dilepton final states at large $q^2$ are more precisely determined.

Integrating the differential branching fractions over $q^2$ bins defined by $(q^2_{\rm low}, q^2_{\rm high})$ permits direct comparison with experiment. Integrating over the full kinematic range yields the total branching fractions

$10^7\,\mathcal{B}_e(4m_e^2, q^2_{\rm max}) = 6.14 \pm 1.33$,
$10^7\,\mathcal{B}_\mu(4m_\mu^2, q^2_{\rm max}) = 6.12 \pm 1.32$,
$10^7\,\mathcal{B}_\tau(14.18\ {\rm GeV}^2, q^2_{\rm max}) = 1.44 \pm 0.15$,

where $q^2_{\rm max} = (M_B - M_K)^2$ is the kinematic endpoint. For the ditau final state we begin the integration at 14.18 GeV$^2$ to account for the experimentally vetoed $\psi(2S)$ region. A detailed comparison of our Standard Model branching fraction results with experiment, and with other calculations, is given in Table I. The results of Altmannshofer and Straub [4] use form factors from Ref. [38], in which quenched lattice [39] and light-cone sum rule [6] results are combined. The results of Bobeth et al. [5] use form factors obtained from light-cone sum rules in Ref. [7] and extrapolated to large $q^2$ via $z$ expansion. Correlations among form factors are accounted for in the calculation of the ratios. We give values of the branching fraction ratios in different $q^2$ bins in Tables II and III.
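A sketch of the relations in play here, in the standard notation (the full expressions for $a_\ell$ and $c_\ell$ live in Ref. [18] and are not reproduced in this text):

$$\frac{d\Gamma_\ell}{dq^2} = 2\left(a_\ell + \frac{c_\ell}{3}\right),\qquad \mathcal{B}_\ell(q^2_{\rm low}, q^2_{\rm high}) = \tau_B \int_{q^2_{\rm low}}^{q^2_{\rm high}} \frac{d\Gamma_\ell}{dq^2}\,dq^2,\qquad R^\mu_e = \frac{\mathcal{B}_\mu}{\mathcal{B}_e},\quad R^\tau_\ell = \frac{\mathcal{B}_\tau}{\mathcal{B}_\ell}.$$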
The angular distribution of the differential decay rate is written in terms of $\theta_\ell$, the angle between the $B$ and the $\ell^-$ as measured in the dilepton rest frame (the standard form is reconstructed at the end of this letter). The "flat term" $F_H$, introduced by Bobeth et al. [42], is suppressed by $m_\ell^2$ in the Standard Model and is potentially sensitive to new physics [1,5]. The "forward-backward asymmetry" $A_{FB}$ is zero in the Standard Model (up to negligible QED contributions [42,43]), so it is also a sensitive probe of new physics. The flat term $F_H(q^2_{\rm low}, q^2_{\rm high})$ [42] is constructed as a ratio to reduce uncertainties. Evaluated in experimentally motivated $q^2$ bins, values for $F_H^{e,\mu,\tau}$ are given in Tables II and III.

[Fig. 3 legend: $J/\psi$ and $\psi(2S)$ veto regions marked; experimental points from Belle [11], BABAR [12], CDF [13], and LHCb [14,15].]

SUMMARY AND OUTLOOK

Employing unquenched lattice QCD form factors for the rare decay $B \to K\ell^+\ell^-$ [18], we calculate the first model-independent Standard Model predictions for: differential branching fractions; branching fractions integrated over experimentally motivated $q^2$ bins; ratios of branching fractions potentially sensitive to new physics; and the flat term in the angular distribution of the differential decay rate. Where available, we compare with experiment and previous calculations. For $q^2 \gtrsim 10$ GeV$^2$ our results are more precise than previous Standard Model predictions. For all $q^2$ our results are consistent with previous calculations and experiment.
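For completeness, the angular distribution and flat term discussed above follow the parameterization of Bobeth et al. [42]; the display equations were lost in extraction, so the standard forms are reconstructed below rather than quoted from this text:

$$\frac{1}{\Gamma_\ell}\,\frac{d\Gamma_\ell}{d\cos\theta_\ell} = \frac{3}{4}\left(1 - F_H^\ell\right)\left(1 - \cos^2\theta_\ell\right) + \frac{F_H^\ell}{2} + A_{FB}^\ell\,\cos\theta_\ell,$$

$$F_H^\ell(q^2_{\rm low}, q^2_{\rm high}) = \frac{\displaystyle\int_{q^2_{\rm low}}^{q^2_{\rm high}} 2\left(a_\ell + c_\ell\right)dq^2}{\displaystyle\int_{q^2_{\rm low}}^{q^2_{\rm high}} 2\left(a_\ell + c_\ell/3\right)dq^2}.$$

With $d\Gamma_\ell/(dq^2\,d\cos\theta_\ell) = a_\ell + c_\ell\cos^2\theta_\ell$, integrating over $\cos\theta_\ell$ recovers $d\Gamma_\ell/dq^2$ above, and the $m_\ell^2$ suppression of $a_\ell + c_\ell$ is what drives $F_H$ toward zero for light leptons.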
Effects of personal and health characteristics on the intrinsic capacity of older adults in the community: a cross-sectional study using the healthy aging framework

Background: Intrinsic capacity (IC) can better reflect the physical functioning of older adults. However, few studies have systematically and thoroughly examined its influencing factors, providing limited evidence for the improvement of intrinsic capacity. The objective of this study was to provide a comprehensive description of the overall decline in intrinsic capacity among older persons in the community. Additionally, the study aimed to analyze the composition of the five domains of decline, compare the rate of decline among older adults, and investigate the factors that influence this decline.

Methods: This was a cross-sectional study conducted in a Chinese community. A self-designed general characteristics questionnaire was created based on the healthy aging framework and a systematic review. Intrinsic capacity was assessed with the Mini-Mental State Examination (MMSE), Geriatric Depression Scale (GDS-15), Community Health Record Management System (CHRMS), Mini Nutritional Assessment Short Form (MNA-SF), and Short Physical Performance Battery (SPPB). The influencing factors of intrinsic capacity were investigated using stepwise logistic regression.

Results: A total of 968 older adults with a median age of 71.00 (IQR 68.00-76.75) were examined, and 704 older adults (72.7%) showed a decline in intrinsic capacity. A decline in a single domain was seen in 39.3% of older adults, with reductions in each domain ranging from 5.3% (psychological) to 52.4% (sensory). The study examined the composition of the domains experiencing decline: among individuals with decline in two domains, the combination of the sensory and locomotor domains was the most frequent (44.5%, n = 106); among those with decline in three domains, the combination of the sensory, cognitive, and locomotor domains (51.3%, n = 44); and among those with decline in four domains, the combination of the sensory, vitality, cognitive, and locomotor domains (60.0%, n = 15). Older adults had a higher risk of intrinsic capacity decline if they were older (95% CI: 1.158-2.310), had lower education, lived alone (95% CI: 1.133-3.216), smoked (95% CI: 1.163-3.251), had higher Charlson Comorbidity Index scores (95% CI: 1.243-1.807), did not exercise regularly (95% CI: 1.150-3.084), or had lower handgrip strength (95% CI: 0.945-0.982).

Conclusions: We found a relatively high prevalence of intrinsic capacity decline; more attention should be paid to older adults who are older, less educated, live alone, and have more comorbidities. It is imperative to prioritize a healthy lifestyle among older persons who smoke, lack regular exercise, and have inadequate handgrip strength.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12877-023-04362-7.
Keywords: Intrinsic capacity, Prevalence, Influencing factors, Older adults, Community

Background

Population aging is a significant medical and social challenge [1]. As of 2020, the global population of individuals classified as elderly has reached 99, and it is projected that by 2050 the elderly population will approach approximately 90 million. This trend is expected to increase the number of elderly individuals requiring long-term care [2]. Studies reveal that around 25% of the worldwide economic cost of disease in adults over 60 is attributable to health issues [3].

The ability to perform a specific function is the essence of health in old age [4]. The World Health Organization (WHO) defined healthy aging in 2015 as obtaining and preserving the functional abilities that enable older persons to achieve well-being, underscoring the importance of functional ability. Intrinsic capacity is a crucial concept here, defined as the sum of an individual's physical and mental abilities. It contains five domains: locomotor, psychological, sensory, cognitive, and vitality [4], and is crucial in maintaining and enhancing functional abilities and promoting healthy aging in older adults [5]. According to the healthy aging framework, intrinsic capacity is a combination of physical and mental health functions that are genetically based and influenced by personal and health characteristics. Intrinsic capacity is determined by three primary factors: personal qualities, genetic inheritance, and health attributes [4].

International researchers have taken up the concept of intrinsic capacity to improve care for older adults. The WHO recommends screening older people for declining intrinsic capacity as early as possible. The measurement of intrinsic capacity has not been standardized, and various measures have been used to assess it in different settings and populations, which may lead to different outcomes. The reported prevalence of declining intrinsic capacity ranges from 19.23% [6] to 89.3% [7], but most studies show the problem to be severe. Lower intrinsic capacity is significantly associated with increasing age, female gender, lower educational level, lower wealth, more chronic diseases, and subjective social status [8,9]. Aging affects almost all physiological processes, but changes in body composition are the most observable [10]. There is a lack of studies examining the association of anthropometric measures, such as body mass index and waist-to-hip ratio, and body composition measures, such as skeletal muscle mass and body fat mass, with intrinsic capacity in older adults. These measures have been widely used to objectively assess the nutritional and energy metabolic status of older adults and have been recognized as significant predictors of physical health in this population [11]. However, limited research has been conducted to validate their relationship with intrinsic capacity. One study surveyed 376 participants from a hospital and analyzed the correlation between intrinsic capacity and anthropometric indicators including fat-free mass, body fat percentage, and visceral fat area; no statistically significant correlation was found between these variables [6]. Since older adults in the community have better health than inpatients, this relationship needs to be verified in community-dwelling older adults with a larger sample.
It is recognized that setting and wealth can significantly influence intrinsic capacity, and low- and middle-income countries will bear a greater burden from population aging. To the best of our knowledge, studies have primarily focused on high-income countries; even when intrinsic capacity studies were conducted in low- and middle-income regions, the settings chosen were cities with higher economic levels, such as Beijing, China [12], and Nagoya, Japan [13], resulting in limited evidence for economically disadvantaged areas. China is a developing country and one of the most rapidly aging countries; it can be divided into four regions (the east, west, south, and north), with the west having the lowest per capita disposable income. The Xinjiang Uyghur Autonomous Region is situated in the western part of China. This region has a distinctive environment, defined by prolonged winters and challenging travel conditions, which may contribute to less physical activity among residents. Additionally, the local population's dietary preferences, including a high consumption of pasta and meat, may make them more prone to obesity. We conducted this study in Urumqi, the capital of Xinjiang, using randomized whole-group (cluster) sampling of older adults. The study aimed to describe the status of the intrinsic capacity of older adults, compare the decline in intrinsic capacity across older adults with different characteristics, and explore the factors influencing the intrinsic capacity of older adults in the community.

Study design and participants

We used a randomized whole-group sampling method. Three of the seven districts were randomly selected based on the administrative division of Urumqi, Xinjiang, with one community health service center chosen randomly from each district to conduct a whole-group survey of older adults in January, June, and July 2022. The inclusion criteria were (a) being over the age of 65 years and (b) having basic communication skills. The exclusion criteria were (a) severe mental system diseases, (b) metabolic diseases, (c) vital organ failure, and (d) severe disability that prevented participants from cooperating with the test. This study was approved by the Ethics Committee of Xinjiang Medical University (approval no. XJYKDXR20220117021).

Data collection

Initially, older adults were recruited using telephone appointments, resident WeChat group publicity, and on-site publicity. Second, with the consent of the older adults, we conducted a face-to-face investigation. Professionally trained surveyors conducted one-to-one surveys: the surveyor read out the paper questionnaire, the older adult responded, the response was recorded, and the surveyor completed the physical measurements. Finally, a research assistant verified the completeness of the items. This study approached 1042 older adults, 74 of whom had missing critical information; the response rate was 92.9%.

Intrinsic capacity

The measurement covered the 5 domains proposed by the WHO, and the tool selection was based on a combination of the WHO Integrated Care for Older People (ICOPE) screening tool [14] and research appropriate to the purpose of this study [15].
The Short Physical Performance Battery (SPPB) measured the locomotor domain, comprising a walking test, chair stands, and standing balance. For the walking test, participants completed a 4-m walk at their usual speed, repeated twice. For the chair stands, participants had to stand up from a chair five times while keeping their feet flat on the ground and their arms folded across the chest. For standing balance, we assessed whether participants could hold side-by-side, semi-tandem, and full-tandem stances for 10 s each. The SPPB score ranges from 0 to 12; a score below 8 represents a decline in the locomotor domain.

The cognitive domain was measured using the Mini-Mental State Examination (MMSE). The education-adjusted thresholds are 17, 20, and 24 points for primary, junior middle, and senior high school, respectively; a score below the threshold indicates a decline in the cognitive domain.

The psychological domain was assessed using the Geriatric Depression Scale (GDS-15), which has a total score of 0 to 15; a score of 8 or more indicates a decline in the psychological domain.

The vitality domain was scored from 0 to 14 on the Mini Nutritional Assessment Short Form (MNA-SF). A score of 11 or less indicates vitality domain decline.

The investigator assessed decline in the sensory domain using the Community Health Record Management System (CHRMS) to retrieve the results of vision and hearing examinations conducted on older adults within the past year; alternatively, the investigator relied on self-reports from older adults of a decline in vision and/or hearing that considerably affected their daily activities.

The diagnostic criterion for a decline in intrinsic capacity was a decrease in any of the five domains. A reduction in a single domain was assigned a score of 1 point, and the cumulative score ranged from 0 to 5; a higher score indicated a more significant decline in intrinsic capacity.

Variables

The variables impacting intrinsic capacity have been the subject of a systematic review by the research team, published previously [16] and registered on PROSPERO under number CRD42022292609. To incorporate indicators as thoroughly as possible, we combined the study's actual circumstances with the systematic review findings (see Additional file 1: Appendix 1 for the systematic review's extraction of factors related to the decline in intrinsic capacity). A general information questionnaire for older persons was developed, and the indicators were organized and classified into personal and health characteristics per the healthy aging framework.

A person was considered a smoker if they had smoked regularly or cumulatively for 6 months. Adults who drink alcohol occasionally, often, or every day were classified as drinkers. Regular exercise was defined as exercising at least three times per week, at least 30 min each time, for more than 6 months. The Charlson Comorbidity Index (CCI) is the summation of the assigned weights of seventeen comorbidities.
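To make the scoring rule above concrete, here is a minimal sketch in Python. The class, field names, and function are hypothetical illustrations; the psychological-domain threshold direction is an assumption based on standard GDS-15 scoring (higher scores indicate more depressive symptoms), while the other cutoffs are quoted from the text.

```python
from dataclasses import dataclass

@dataclass
class DomainAssessment:
    """Raw instrument results for one participant (field names are illustrative)."""
    sppb: int              # Short Physical Performance Battery, 0-12
    mmse: int              # Mini-Mental State Examination, 0-30
    mmse_cutoff: int       # education-adjusted cutoff: 17, 20, or 24
    gds15: int             # Geriatric Depression Scale, 0-15
    mna_sf: int            # Mini Nutritional Assessment Short Form, 0-14
    sensory_decline: bool  # from CHRMS records or considerable self-reported loss

def ic_decline_score(a: DomainAssessment) -> int:
    """Count of declined domains (0-5); a score >= 1 marks overall IC decline."""
    declines = [
        a.sppb < 8,              # locomotor domain
        a.mmse < a.mmse_cutoff,  # cognitive domain (education-adjusted)
        a.gds15 >= 8,            # psychological domain (direction assumed)
        a.mna_sf <= 11,          # vitality domain
        a.sensory_decline,       # sensory domain
    ]
    return sum(declines)
```

A participant with `ic_decline_score(...) >= 1` would count toward the 72.7% overall decline rate reported in the results below.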
The handgrip strength (HGS) of older adults was measured using a grip dynamometer (EH10, CAMRY, China), and the highest value among three tests was taken. Calf circumference (CC) was measured twice on each side with a tape measure, and the average was used. Bioelectrical impedance analysis was performed with a body composition analyzer (DBA-210, DONGHUAYUAN Medical, China) to estimate body composition, including body fat mass, skeletal muscle mass, body fat percentage, waist-to-hip ratio, and visceral fat area.

Statistical analysis

Count data were presented as frequencies, and the normality of continuous data was assessed using the Shapiro-Wilk test. Descriptive statistics for continuous data are given as mean ± SD (standard deviation) or median (interquartile range). The chi-square test was employed to assess disparities in the decline of intrinsic capacity among older persons with varying characteristics. A Pareto chart, which visually depicts the "two-eighths principle" (the notion that 80% of problems may be attributed to 20% of causes), was used to examine the composition of the domains showing decline in intrinsic capacity. Variables that were statistically significant in the chi-square test were entered into a stepwise logistic regression to derive the factors influencing intrinsic capacity. The analysis was performed using IBM SPSS Statistics 25.0, and a p-value of less than 0.05 was considered statistically significant. We report odds ratios (OR) and 95% confidence intervals (CI) for the regression model.

Sample characteristics

Ages ranged from 60.0 to 93.0 years, with 58.5% female. Table 1 lists the characteristics of the older adults.

Intrinsic capacity among older adults

The intrinsic capacity score of the elderly in the community was 1.00 (0.00, 2.00) on the 0.00-5.00 scale, and the rate of decline was 72.7%. The proportions of decline in the locomotor, cognitive, psychological, vitality, and sensory domains were 31.4% (n = 304), 19.7% (n = 191), 5.3% (n = 51), 11.1% (n = 107), and 52.4% (n = 507), respectively. The percentages of those who experienced a decrease in intrinsic capacity in one to five domains were 39.3% (n = 380), 22.8% (n = 221), 8.0% (n = 77), 2.4% (n = 23), and 0.3% (n = 3), respectively. A Pareto chart analysis incorporating the "two-eighths principle" reveals that, among single-domain declines, the sensory domain may be classified as "critical", with a cumulative proportion of 60.3% (Fig. 1a). Sensory-locomotor, sensory-cognitive, and cognitive-locomotor were the three "critical" items among two-domain declines, with a collective percentage of 79.2% (Fig. 1b). Among three-domain declines, sensory-locomotor-cognitive, sensory-locomotor-vitality, and psychological-sensory-locomotor were "critical", with a combined percentage of 79.2% (Fig. 1c). Among four-domain declines, sensory-locomotor-cognitive-vitality was "critical", with a percentage of 56.5% (Fig. 1d). People aged 75-89 years, those with lower education, those living alone, those who smoked and did not exercise regularly, those with lower handgrip strength, those with a higher CCI, and those with lower skeletal muscle mass showed significantly more severe intrinsic capacity decline (p < 0.05). Still, no significant difference was observed in gender, monthly income, sources of finance, smoking history, drinking history, BMI, CC, body fat mass, fat percentage,
waist-to-hip ratio, or visceral fat area (Table 1).

Condition of intrinsic capacity

This study assessed the factors associated with the decline in intrinsic capacity in community dwellers over the age of 60, and the prevalence of a decline in intrinsic capacity in older adults was relatively high. Similarly, other studies have found high rates of decline in intrinsic capacity among older adults in the community. A study found a prevalence of 77.4% of impaired intrinsic capacity in a survey of a senior-friendly community in Beijing among those over the age of 75 [17]. In a study of 759 older adults aged 70-89 years with memory impairment in France, 89.3% of participants had one or more conditions related to a decline in intrinsic capacity [7]. Some older adults showed a decrease in 1-5 domains, with the highest percentage declining in just one domain; as the number of declining domains increased, the proportion of older persons exhibiting decline across 4-5 domains decreased correspondingly. Among the various domains of intrinsic capacity, about half of the older adults showed decreases in the sensory domain, followed by the locomotor domain. The aging process is often accompanied by hearing and vision loss, yet most older adults consider this a normal physiological change with age and thus neglect the treatment and management of symptoms, which ultimately seriously impacts daily life. Aging also leads to musculoskeletal system disorders in the elderly; according to preliminary research conducted by the group, the prevalence of sarcopenia among older adults in Urumqi, Xinjiang was 38.8% [18], which impairs locomotor function in the elderly and can even cause adverse health outcomes such as disability and reduced quality of life [4].

When examining the domains contributing to the decline in intrinsic capacity among older adults, it was observed that individuals who experienced a decrease in only one domain predominantly exhibited reductions in the sensory domain. In cases where individuals experienced a decline in two domains, the sensory and locomotor domains showed the most pronounced declines. Similarly, among those who experienced a decrease in three domains, the sensory, cognitive, and locomotor domains exhibited the most significant reductions. Moreover, individuals who experienced a decline in four domains showed the most pronounced decreases in the sensory, vitality, cognitive, and locomotor domains. This observation indicates a strong correlation between the sensory and locomotor domains. Prior research has shown that elderly individuals who experience visual or auditory impairments exhibit a higher propensity for restricted physical movement [19]. Yu [20] employed latent class analysis to investigate the trajectory of intrinsic capacity decline; their findings revealed a distinct pattern characterized by a pronounced decrease in the sensory domain and a moderate reduction in the locomotor domain. The rationale behind this phenomenon is that a decline in sensory function among older adults can contribute to reduced balance, body stability, and overall physical functioning [19,21]. Consequently, this can lead to insecurity regarding their environment and apprehension towards engaging in physical activities. As a precautionary measure, older adults with diminished sensory abilities may limit their activities to prevent accidental injuries, which can ultimately result in a decline in locomotor function.

Fig. 1 The composition of domains that decline in intrinsic capacity. The Pareto chart visually depicts the "two-eighths principle", which posits that 80% of problems may be attributed to 20% of causes. (a) Among single-domain declines, the sensory item may be classified as "critical", with a cumulative proportion of 60.3%; the remaining items (locomotor, psychological, cognitive, and vitality) collectively account for 39.7% and may be considered "insignificant" in terms of their contribution. (b) Sensory-locomotor, sensory-cognitive, and cognitive-locomotor were the three "critical" items among two-domain declines, with a collective percentage of 79.2%; the 7 items sensory-vitality, vitality-locomotor, psychological-sensory, vitality-cognitive, psychological-vitality, psychological-locomotor, and locomotor-cognitive are "insignificant", with a total percentage of 20.8%. (c) Among three-domain declines, sensory-locomotor-cognitive, sensory-locomotor-vitality, and psychological-sensory-locomotor are "critical", with a combined percentage of 79.2%. (d) Among four-domain declines, sensory-locomotor-cognitive-vitality is "critical", with a percentage of 56.5%; the four items psychological-sensory-locomotor-cognitive, psychological-vitality-cognitive-locomotor, psychological-sensory-vitality-cognitive, and psychological-sensory-locomotor-vitality are "insignificant", accounting for a total proportion of 43.5%.
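A Pareto chart like Fig. 1 can be reproduced with a few lines of matplotlib; the sketch below uses illustrative placeholder counts, not the study's data.

```python
import matplotlib.pyplot as plt
import numpy as np

# Illustrative counts of declined domains (NOT the study's data)
labels = ["sensory", "locomotor", "cognitive", "vitality", "psychological"]
counts = np.array([229, 75, 40, 21, 15])

order = np.argsort(counts)[::-1]          # sort descending, as Pareto charts require
counts, labels = counts[order], [labels[i] for i in order]
cum_pct = 100 * np.cumsum(counts) / counts.sum()

fig, ax = plt.subplots()
ax.bar(labels, counts)
ax.set_ylabel("count")
ax2 = ax.twinx()                           # secondary axis for the cumulative line
ax2.plot(labels, cum_pct, marker="o", color="tab:red")
ax2.axhline(80, linestyle="--", color="gray")  # the 80% "two-eighths" reference
ax2.set_ylabel("cumulative %")
plt.show()
```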
Influencing factors of intrinsic capacity

We explored the effect of personal characteristics on the intrinsic capacity of older adults in the community. Our study revealed a significant negative association between increasing age and intrinsic capacity, indicating that the loss of intrinsic capacity may be a progressive process associated with aging. These findings are consistent with past studies in this area [22]. This could reflect the natural progression of underlying diseases and of aging itself [14]: the progressive accrual of molecular and cellular impairments accompanying the aging process leads to a reduction in physiological capacities and an elevation in disease susceptibility, culminating in an overall fall in individual capabilities [4]. Although intrinsic capacity tends to decline with age, a particular population of very old adults exists with intrinsic capacity levels similar to those of younger older adults. Recruitment for this study was voluntary, and few people over 90 years of age were able to travel to the research site on their own, so the number of the oldest participants was limited. At the same time, older adults who volunteered to participate were more proactive in caring for their own health, which may be one reason for their higher level of intrinsic capacity. Still, this suggests that we should attend to the diversity of older adults. In terms of education, the higher the level of education, the higher the level of intrinsic capacity, consistent with the reported association between lower education levels and lower intrinsic capacity scores [23]. Older adults with better education may invest more financially in health and have better access to health resources than those with lower education, so they are more likely to develop healthy behavior patterns and greater self-care skills, which are conducive to maintaining healthy physical functioning, cognition, and psychology [22]. Older adults who live with their spouses and children have higher intrinsic capacity scores, which may be related to more convenient and accessible material support, emotional support, and life care. The WHO issued a manual on integrated care for older adults to better guide community care; it indicates that caregivers should be involved in the overall care of older adults, implying that living with others may be a modifiable condition for healthy aging [14].
We similarly explored the impact of health characteristics on the intrinsic capacity of older adults in the community. Smoking and a higher CCI increased the risk of a decline in intrinsic capacity, which may be related to the long-term impairment of physical function in chronic disease. Numerous harmful compounds in cigarettes have been linked to various severe ailments, including cardiovascular disorders, respiratory afflictions, and malignancies, particularly among the aged population. Research findings indicate a positive correlation between smoking behavior and elevated susceptibility to cognitive impairment [24]. A study conducted in Shanghai, surveying a total of 4,190 older persons, revealed that visual impairment was twice as prevalent among heavy smokers as among non-smokers [25]. This association may be attributed to the adverse effects of smoking on ocular health, including the development of cataracts, age-related macular degeneration, glaucoma, and other ophthalmic diseases [26]. In low- and middle-income countries, more than half of older adults are likely to have multimorbidity [27]. Therefore, it is crucial to emphasize the value of functional capacity even in the presence of chronic disease, as stated in the WHO World Report on Ageing and Health [4]. Notably, the health status of individuals changes dynamically. When assessing the health needs of older adults, it is critical to consider the impact of interactions between diseases on functional capacity, in addition to the specific conditions they may be experiencing [28].

Exercise has been shown to improve functional performance [29] and cognitive function [30], and to alleviate psychological problems [31]. The absence of consistent physical activity is associated with a greater risk of decreased intrinsic capacity [32]. By promoting exercise engagement among older adults, unnecessary dependence on medical care can be minimized [33,34]. Lower handgrip strength was associated with a higher risk of declining intrinsic capacity. Handgrip strength is among the most significant single biomarkers of health and may evolve into a critical indicator for tracking overall intrinsic capacity [35,36]. Reduced handgrip strength in older persons is positively correlated with cognitive decline and mental illness, which has a detrimental impact on intrinsic capacity, specifically on preserving muscle strength; consequently, these findings hold substantial clinical significance [37]. This study is one of a limited number in which the impact of anthropometric and body composition indicators on the intrinsic capacity of older persons has been investigated. Skeletal muscle mass refers to the proportion of skeletal muscle tissue within the overall body composition, indicating an individual's health status. Older adults with higher skeletal muscle mass were shown to have better intrinsic capacity in univariate analyses; however, no significant differences remained when this variable was included in the regression model. A survey of 376 geriatric patients showed that IC scores were not associated with body composition variables such as fat-free mass, percentage of body fat, and visceral fat area [6]. This further suggests that IC may be more strongly associated with muscle function than with muscle mass.
Economic level did not influence intrinsic capacity in our data. While economic conditions in Xinjiang may not be high, it is essential to note that the community health service centers examined in this study have a comprehensive elderly service system. Among these centers, one is recognized as a national model community health service center, which actively promotes contractual services by family physician teams, the management of chronic diseases, and care for older adults. These efforts contribute significantly to meeting the health needs of the elderly population.

The present study has several strengths. It incorporated a substantial sample size of over 1,000 older persons residing in the community and employed randomized whole-group sampling to ensure a study population that accurately represents the larger population. Intrinsic capacity was measured according to the WHO-recommended approach and with instruments that have been formally validated with valid and reliable results. The study factors were included on the basis of theory and a systematic review, and included less-studied anthropometric and body composition indicators that were measured rigorously. Furthermore, our study emphasized the analysis of the distinct domains associated with the deterioration of intrinsic capacity. Identifying the precise domain, or combination of domains, that predominantly contributes to loss of intrinsic capacity may prove valuable in directing interventions to preserve intrinsic capacity among older individuals in subsequent research.

The cross-sectional design of this study limited the assessment of causal relationships between variables; future longitudinal studies should be conducted to assess causality. Furthermore, although we obtained a relatively high prevalence of intrinsic capacity decline, the true prevalence is likely underestimated, because the participating older adults could care for themselves and were attentive to their health.

Conclusion

Our findings show that declining intrinsic capacity is common in low- and middle-income regions, and there is growing evidence for the factors influencing intrinsic capacity. Our results suggest that the prevalence of declining intrinsic capacity is high among older adults in the community. More attention should be given to older adults who are less educated, live alone, and have more comorbidities. A healthy lifestyle should be emphasized for older adults with a smoking history, no exercise habits, and low handgrip strength.

Table 1 Comparison of intrinsic capacity by characteristics (IC: intrinsic capacity; CCI: Charlson Comorbidity Index; CC: calf circumference; HGS: handgrip strength).

Table 2 Stepwise logistic regression of factors influencing intrinsic capacity (CCI: Charlson Comorbidity Index; HGS: handgrip strength).
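The stepwise logistic regression behind Table 2 was run in SPSS. For readers working in Python, a rough stand-in using forward selection on p-values with statsmodels might look like the sketch below; the column names, entry threshold, and selection rule are illustrative assumptions (SPSS's procedure also removes variables, and categorical predictors would need encoding first).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_stepwise_logit(df: pd.DataFrame, outcome: str, candidates: list,
                           enter_p: float = 0.05):
    """Forward-selection logistic regression on p-values (a simplified stand-in
    for SPSS stepwise; data columns and threshold are illustrative)."""
    selected = []
    improved = True
    while improved and len(selected) < len(candidates):
        improved = False
        pvals = {}
        for var in (v for v in candidates if v not in selected):
            X = sm.add_constant(df[selected + [var]])
            pvals[var] = sm.Logit(df[outcome], X).fit(disp=0).pvalues[var]
        best = min(pvals, key=pvals.get)
        if pvals[best] < enter_p:       # admit the most significant candidate
            selected.append(best)
            improved = True
    return sm.Logit(df[outcome], sm.add_constant(df[selected])).fit(disp=0)

# Hypothetical usage:
# fit = forward_stepwise_logit(data, "ic_decline",
#                              ["age_group", "education", "lives_alone",
#                               "smoker", "cci", "regular_exercise", "hgs"])
# print(np.exp(fit.params), np.exp(fit.conf_int()))  # odds ratios and 95% CIs
```

Exponentiating the fitted coefficients, as in the commented usage, yields the odds ratios and confidence intervals of the kind reported in Table 2.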
Clinico-Pathological Spectrum of Alveolar Soft Part Sarcoma: Case Series from a Tertiary Care Cancer Referral Centre in India with a Focus on Unusual Clinical and Histological Features

Objective: Alveolar soft part sarcoma (ASPS) is characterized by a distinctive histomorphology of variably discohesive epithelioid cells arranged in nests, and by the translocation t(X;17)(p11.2;q25) resulting in the ASPSCR1-TFE3 fusion. The aim of the present study is to review the clinical, histopathological, and immunohistochemical profile of ASPS with a focus on unusual histological features.

Material and Methods: The present study is retrospective and descriptive. All cases with a diagnosis of ASPS were retrieved with clinical and radiology details.

Results: 22 patients with ASPS were identified. The most common site was the lower extremity, and the size range was 3-22 cm. 54.5% of the patients had metastasis, with the lung as the most common site. Metastasis preceded detection of the primary tumour in two cases. All cases showed a similar histopathology of monomorphic epithelioid cells arranged in nests encircled by sinusoidal vasculature. Architecturally, the organoid pattern (81.8%) was followed by the alveolar pattern. 68.2% of the cases showed apple-bite nuclei as the predominant nuclear feature. Rare nuclear features included binucleation (n=13), multinucleation (n=8), pleomorphism (n=4), nuclear grooves in three cases and an intranuclear inclusion in one case, mitosis (n=5), and focal necrosis (n=6). All cases were positive for TFE3 and negative for AE1/AE3, EMA, HMB45, PAX8, MyoD1, SMA, synaptophysin, and chromogranin. Only two cases showed focal S100 positivity while one showed focal desmin positivity.

Conclusion: Diffuse strong nuclear TFE3 positivity is sensitive for ASPS in an appropriate clinicoradiological context. Due to the high propensity for early metastasis, complete metastatic work-up and long-term follow-up are recommended.

INTRODUCTION

Alveolar soft part sarcomas (ASPS) are rare soft tissue tumors of uncertain histogenesis, with a distinctive histomorphological appearance of variably discohesive epithelioid cells arranged in nests and a specific translocation t(X;17)(p11.2;q25) resulting in the ASPSCR1-TFE3 fusion (1). Marked histologic overlap with other tumors, tumors at unusual sites, and unusual clinical presentations with a mass at the metastatic site prior to identification of the primary make the diagnosis tricky. The differential diagnoses include a broad range of mesenchymal and non-mesenchymal neoplasms such as paraganglioma, PEComa, granular cell tumor, and metastatic carcinomas such as metastatic renal cell carcinoma, hepatocellular carcinoma, and adrenal cortical carcinoma. The present study analyzes the clinical, histopathological, and immunohistochemical profile of ASPS, along with clinical outcomes wherever available, with particular emphasis on unusual histological features. The differential diagnosis and potential pitfalls in the current era of an increasing spectrum of TFE3-rearranged tumors are highlighted.

MATERIAL and METHODS

The present study is retrospective; all cases with a histopathological diagnosis of ASPS from 2012 to 2021 were retrieved from the archives of the Department of Oncopathology at a tertiary care cancer center. Demographic, clinical, and radiological data were retrieved from the case records. Cases lacking either immunohistochemistry (IHC) or paraffin blocks were excluded.
Histomorphological and immunohistochemical characteristics were analyzed in each case. TFE3 immunohistochemistry was performed wherever unavailable. The histological parameters evaluated were growth pattern, presence of crystals confirmed by periodic acid-Schiff stain with diastase (PAS-D), nuclear features, presence of inflammation, fibrous septa, vascular invasion, necrosis, cystic change, and myxoid change.

RESULTS

A total of 22 patients with ASPS (0.4%) out of 5541 soft tissue sarcomas were identified from 2012 to 2021. The patient age range was 2-47 years and the median age was 27 years. The M:F ratio was 0.8:1. The most common site was the lower extremity in 45% (10/22) of the cases, followed by the upper extremity in 27.3% (6/22) and the retroperitoneum in 18.1% (4/22), with one case each in the head and neck, chest wall, and lung. Clinical, radiological, and outcome details are given in Table I. Tumor size varied from 3 cm to 22 cm with a mean of 7.8 cm. Lymph node metastasis was seen in only 2 cases. Distant metastasis (54.5%; 12/22) was more frequent than lymph node metastasis. Of these 12 cases with metastasis, 91.7% had synchronous metastasis while three showed metachronous metastasis. The lung was the most common site (90.9%), followed by the brain, bone, and liver (Figure 1). In one case of ASPS of the forearm, an unusual site of metastasis was the bilateral nasal cavities, with biopsy showing a submucosal tumor. Five of these patients had metastasis at multiple sites. Metastasis preceded detection of the primary tumor in two cases: one case presented with a posterior fossa mass and the second with a pathological fracture of the right femur, and both were diagnosed as ASPS on biopsy; subsequently, PET revealed a primary mass in the left iliac region and the right thigh, respectively. Thus, most cases presented at AJCC stage IV at the time of diagnosis (54.5%; 12/22), followed by stage IIIa (22.7%; 5/22), stage I (13.6%; 3/22), and stage II (9.1%; 2/22). One patient with T-ALL developed ASPS in the left paravertebral location post remission. A sibling of this patient also had T-ALL and developed glioblastoma 4 years post remission. The patient was further evaluated and diagnosed with constitutional mismatch repair deficiency syndrome (CMMRD), with a homozygous deletion (chr7:6026910; delC) detected in exon 11 of the PMS2 gene.

Microscopically, all cases showed a multilobular architecture separated by fibrotic bands. The predominant architectural pattern of the tumor cells within the lobules was the organoid pattern (81.8%; 18/22), followed by the alveolar pattern (n=4), encircled by sinusoidal capillary vasculature (Figure 2A,B). The size of the nests was variable, with the number of cells per nest varying from 10 to as many as 200. Focal solid areas without any intervening vasculature were seen in 3 cases (Figure 2C). Thick fibrotic bands were seen in 50% (n=11) of the cases (Figure 2D). The rare architectural features noted were infiltration of single cells into septa and focal spindling of tumor cells, in 3 cases each (Figure 2E). Cytologically, tumor cells were epithelioid or polygonal with abundant eosinophilic granular cytoplasm in 91% of the cases and predominantly clear cytoplasm in 2 cases. It was also noted that the cytoplasm was more condensed near the nucleus, clearing towards the edge of the cell.
The classically described round to oval nuclei with vesicular chromatin, prominent eosinophilic nucleoli, and anisonucleosis were the major feature (>50% of the tumor nuclei) in only 31.8% (7/22) of the cases (Figure 2F), while in 68.2% (15/22) of the cases the majority of the nuclei showed wrinkling and a concave nuclear contour without nucleoli, described as apple-bite nuclei (Figure 3A). Rare nuclear features included binucleation (n=13), multinucleation (n=8), pleomorphism (n=4), nuclear grooves in three cases, and an intranuclear inclusion in one case (Figure 3B-F). Mitotic activity in ASPS is generally rare, with only 5 cases showing occasional mitoses. Necrosis was infrequent and only focally seen, in 6 cases. A lymphovascular embolus was a common finding, seen in 50% of the cases. None of our cases showed perineural invasion. Intratumoral hemorrhage in the center of the nests was seen in 2 cases.

Many cells with PAS-D-positive rod-like crystalline structures in a sheaf-like or stacked configuration in the cytoplasm were seen in 5 cases, while such structures were seen in only occasional cells in 3 cases (Figure 3G). There was no significant inflammatory host response in any case. There were focal intratumoral lymphocytes in 2 cases, but peritumoral lymphocytes were seen in only one case. Other inflammatory cells such as plasma cells, granulocytes, and histiocytes were absent.

IHC was performed in all cases to rule out paraganglioma, PEComa, granular cell tumor, and metastatic carcinomas such as renal cell carcinoma, hepatocellular carcinoma, and adrenocortical carcinoma. All cases showed diffuse strong nuclear TFE3 positivity (Figure 3H) and consistent negativity for AE1/AE3, EMA, vimentin, HMB45, PAX8, MyoD1, SMA, synaptophysin, and chromogranin. Only two cases showed focal S100 positivity while one showed focal desmin positivity (Figure 3I). Histomorphological and IHC details are given in Table II.

All except one patient with localized (stage I-III) disease were treated with surgical resection with clear margins, with no evidence of disease on follow-up. Response was noted in 5 cases with tumor size <5 cm, while only 3 of 16 cases with a diameter >5 cm showed no evidence of disease on follow-up; outcome was not affected by site in our study. A single patient with stage I ASPS of the lung was treated with chemotherapy and radiotherapy with no response, but rather progression of disease with metachronous metastasis to the liver and bone. All patients with disseminated (stage IV) disease were treated with anthracycline-based chemotherapy and radiotherapy of 30 Gray in 10 cycles. On regular follow-up, no radiological response but rather progression of disease was seen, with an increase in the size of the tumor at the primary site as well as in the size of the metastases. Among the 4 paediatric patients (age <17 years), a response was noted in only one case each of stage I and stage II disease, while the other two, with stage IV disease, showed no response. No hospital death was reported in any patient. All were alive with disease over the limited follow-up period, ranging from 4 to 108 months.

DISCUSSION

ASPSs are rare soft tissue tumors that constitute <1% of all soft tissue sarcomas (1-3). The present study mirrors these findings, with ASPS comprising only 0.4% of all soft tissue sarcomas diagnosed over a period of 10 years. Studies have established that ASPS most commonly affects young adults; concordantly, the age range in the present cohort was 2-47 years, with four pediatric patients (1-16). The literature documents a female-to-male predominance before the age of 30 years, with a reversed ratio at older ages (1-7). Our study corroborates these findings, with an M:F ratio of 0.6:1 in patients less than 30 years, while all three patients over 30 years of age were male. However, Rekhi et al. reported a male preponderance in their study (3). The prominent predilection for the extremities in our series is also well reflected in earlier studies (1-3,7,10). A rare site seen in the present series was primary pulmonary ASPS in a 25-year-old male without evidence of a soft tissue tumor elsewhere at the time of initial diagnosis, confirmed by PET scan. To the best of our knowledge, only three cases of primary pulmonary ASPS have been reported in the English literature to date (8).

The clinical course in our series illustrates the high incidence of metastatic disease at the time of diagnosis, seen in 50% of the cases. Many studies have reported metastatic disease at diagnosis in 55% to 65% of patients (4,7). The most common metastatic site was the lung, while brain metastasis was always a part of disseminated metastasis and never occurred in isolation, a phenomenon also observed by Portera et al. and Keyton et al. (7,10). In our study, metastasis was detected prior to the finding of the primary in two cases, a phenomenon also encountered by other authors (2,12). One of our cases with the primary in the forearm also presented with metastasis in the nasal cavity, which is not reported as a site of metastasis in any of the large series, though rare cases of primary sinonasal ASPS have been reported (1,3,7,9-12). Metastases to the lymph nodes are uncommon and were seen in only 2 cases in the present cohort; Portera et al. reported lymph node metastasis in only a single patient out of 70 cases (7). Our study includes the first reported case of ASPS in a patient with CMMRD (13). CMMRD is a childhood cancer predisposition syndrome caused by biallelic pathogenic variants in one of four mismatch repair (MMR) genes, i.e., MLH1, MSH2, MSH6 and PMS2. It is classically associated with hematological, brain, and intestinal malignancies but is rare in sarcoma; only 30 MMR-deficient bone and soft tissue sarcomas, including 3 ASPS, are found in the literature (13,14). ASPS metastasis to the breast is considered extremely rare and has been reported only in a handful of cases, but was seen in one of our cases (15).

ASPS is known to have a very classical histomorphology, showing very little variation from case to case and site to site. However, the diagnosis is challenging because of morphological overlap with other tumors, particularly on small biopsies, at uncommon sites of occurrence, or on evaluation of the metastatic site prior to identification of the primary, such as in biopsies from the posterior fossa, bone, or nasal cavity in the present series. Difficulties are further confounded by the occurrence of rare morphologic features, particularly in biopsies, such as a solid pattern, clear cytoplasm, and unusual nuclear features. With regard to the pattern, the tumor always had a lobular architecture with variably thick fibrous septa separating the lobules. We noted a significant preponderance of a 'non-alveolar' organoid growth pattern over the alveolar pattern, despite the name of the entity; this needs to be kept in mind, particularly when looking at a small biopsy. Focal clear cytoplasm, seen as a dominant feature in two of our cases, raises the possibility of these cases being confused with other clear cell tumors. The cells were also found to have a feathery kind of cytoplasm, with condensation of the cytoplasm around the nucleus and pale cytoplasm at the periphery, giving a lacy-skirt appearance.

Most of the studies in the literature, including the WHO 2013 and the latest WHO 2020 classification of tumors of soft tissue, have emphasized vesicular nuclei with prominent eosinophilic nucleoli as a characteristic feature of ASPS (1-12), but this was not the most prominent finding in the present series. The dominant nuclear feature (>50% of tumor nuclei) was bland nuclei with marked nuclear folding leading to concave, apple-bite, and crenated nuclei without any nucleoli, in nearly 68.2% of the cases; these nuclei were focal in the rest of the patients. These features were first observed by Fanburg-Smith et al. and Chatura et al. in lingual ASPS, but they were a universal finding in the present series, independent of site (12,16). We also observed focal nuclear grooves in 3 cases, which are not documented in the literature. An intranuclear inclusion was seen in one case, as also observed in two cases by Rekhi et al. (17). Awareness of these nuclear features is important and should not deviate one from the diagnosis of ASPS due to the absence of classical vesicular nuclei, particularly in small biopsies. The exact molecular pathogenetic relation between specific cellular-level structural features and cancer genes is not known. Nucleolar enlargement is classically associated with increased ribosome production, and the production of new ribosomes appears essential for cell-cycle progression. Nuclear envelope irregularity may be an effect of the downstream signaling pathway of the aberrant transcription factor ASPSCR1-TFE3 altering the structure of the nuclear membrane (18,19). Other rare features such as multinucleation and pleomorphism have been observed in other studies as well, with no prognostic significance (2,3,12). The focal mucinous and cystic change reported in the literature was not seen in any of our cases.

Based on morphology, the differential diagnoses considered in the present study were paraganglioma, granular cell tumor, metastatic renal cell carcinoma, adrenocortical carcinoma, hepatocellular carcinoma, rhabdomyosarcoma, PEComa, and melanoma. Previously there was no specific marker for the diagnosis of ASPS, but the discovery of an unbalanced t(X;17) resulting in a fusion of the ASPL gene on chromosome 17 to the TFE3 gene on chromosome X changed this scenario (1,5). Recently, novel HNRNPH3-TFE3, DVL2-TFE3, and PRCC-TFE3 fusions have also been identified (6). Thus, immunodetection of the C terminus of the TFE3 protein in ASPS came to be considered a diagnostic landmark, but it should be interpreted carefully since the list of tumors with TFE3 immunopositivity is increasing (2,3). Cathepsin K is a cysteine protease abundantly expressed by osteoclasts, and its expression is driven by microphthalmia transcription factor (MITF); TFE3 belongs to the same transcription factor subfamily as MITF. It is hypothesized that the TFE3 fusion proteins function like MITF in these neoplasms and thus activate cathepsin K expression, which can be detected by IHC (20). Cathepsin K immunoexpression is non-specific, however, and has been reported in renal cell tumors, granular cell tumors, and numerous additional sarcomas including Kaposi sarcoma, liposarcoma, chondrosarcoma, undifferentiated pleomorphic sarcoma, and leiomyosarcoma (21). Granular cell tumors are diffusely immunopositive for S100, SOX10 and inhibin, which are negative in ASPS; there was focal weak S100 positivity in one of our tumors. Cytoplasmic granules can also be seen in granular cell tumor, but the PAS-positive, diastase-resistant rod-like/rhomboid crystalline inclusions seen in 36.4% of the cases in the present series are specific for ASPS and can be highlighted with MCT1 and CD147 immunostains, while the cytoplasmic granules in granular cell tumor are CD68 positive (1,3). Although TFE3 positivity has been reported in paraganglioma, immunopositivity for neuroendocrine markers, with S100 highlighting sustentacular cells, helps differentiate it from ASPS (3). Negativity for PAX8, pan-cytokeratin, and CD10 helps rule out renal cell carcinoma, further substantiated by the absence of a renal mass on radiology. Negative immunostaining for vimentin and Melan-A ruled out an adrenocortical carcinoma. S100 protein, HMB45, and Melan-A negativity in tumor cells ruled out a melanoma. Focal desmin positivity was seen in two of our cases, but the lack of nuclear positivity for MyoD1 and myogenin ruled out a rhabdomyosarcoma. PEComa is differentiated from ASPS by its reactivity for HMB45, but aberrant expression of HMB45 has recently been reported in ASPS as well; although both tumors are TFE3-rearranged, the diagnosis of ASPS was favored based on the presence of PAS-D needle-like crystals (21). Translocation analysis can be performed when necessary and is the diagnostic 'gold' standard, but one should be aware of other TFE3-rearranged tumors while interpreting the results (3,21). None of our cases showed extensive mitosis or necrosis, which are considered classical features of high-grade sarcoma. Despite that, the biological behavior of ASPS is aggressive; hence the FNCLCC histological grading system is not used for these tumors, and all ASPS are by definition considered high grade (1).

The management of ASPS typically involves surgical resection for localized disease, which was performed in 8 cases and was curative. Anthracycline-based chemotherapy with or without radiotherapy was given for disseminated tumors with metastases in 10 cases and for localized disease in one case. It was largely ineffective, with no response in any case; rather, progression of disease was noted in all such cases in the present series. A search for novel therapies, evaluated in clinical trials, is under way, and molecular targeted treatment has been increasingly utilized. Vascular endothelial growth factor receptor-targeted TKIs such as pazopanib, crizotinib, sorafenib, anlotinib, sunitinib, and cediranib, and MET kinase inhibitors, have been explored in clinical trials for metastatic disease with promising results (5,22). We argue against a major future role for immunotherapy in ASPS, since a very focal intratumoral inflammatory host response was seen in only two cases and only one case showed a minimal lymphocytic response at the tumor edge.

CONCLUSION

ASPS has morphological and immunohistochemical overlap with many mesenchymal and non-mesenchymal tumors. Diffuse strong nuclear TFE3 positivity is sensitive for ASPS in an appropriate clinicoradiological context.
Awareness of TFE3 positivity in other tumors is vital. It is imperative to employ a panel of markers in order to identify an alveolar soft part sarcoma from its differential diagnoses. Due to the high propensity for early metastasis even at the time of presentation in ASPS, complete metastatic workup and long term follow up is recommended. ASPS is associated with slow progression and resistance to conventional cytotoxic chemotherapy. Ethics Approval and Consent to Participate The study has been approved by the institute research committee of GCRI assuring legal and ethical criteria fulfilment in the study with review number IRC/2022/P-79. Funding Authors received no financial support for the research, authorship and/or publication of this manuscript
Secretory phospholipase A2 in the pathogenesis of acute dengue infection

Abstract

Introduction: Platelet activating factor (PAF) is an important mediator of vascular leak in acute dengue. Phospholipase A2s (PLA2) are inflammatory lipid enzymes that generate and regulate PAF and other mediators associated with mast cells. We sought to investigate whether mast cell activation and increases in secretory PLA2 (sPLA2) are associated with an increase in PAF and the occurrence of dengue haemorrhagic fever (DHF).

Methods: The changes in the levels of mast cell tryptase and PAF and the activity of sPLA2 were determined throughout the course of illness in 13 adult patients with DHF and 30 patients with dengue fever (DF).

Results: We found that sPLA2 activity was significantly higher in patients with DHF when compared to those with DF during the first 120 h of clinical illness. sPLA2 activity was significantly associated with PAF levels, which were also significantly higher in patients with DHF. Although levels of mast cell tryptase were higher in patients with DHF, the difference was not significant, and the levels were not above the reference ranges. sPLA2 activity significantly correlated with the degree of viraemia in patients with DHF but not in those with DF.

Conclusion: sPLA2 appears to play an important role in the pathogenesis of dengue. Since its activity is significantly increased during the early phase of infection in patients with DHF, understanding the underlying mechanisms may provide opportunities for early intervention.

Introduction

Dengue is one of the most important mosquito-borne virus infections in the world, affecting approximately 390 million individuals annually [1]. It has been estimated that in 2013, 58.4 million individuals developed symptomatic dengue virus infection, of whom 18% required hospitalization [2]. Although mortality rates have improved due to better management, dengue is associated with significant morbidity, with an annual cost estimated at US$ 8.9 billion [2]. Typical dengue is characterised by fever, myalgia, arthralgia and gastrointestinal symptoms such as abdominal pain and vomiting [3]. In some patients, this initial febrile or viraemic phase is followed by a critical phase, which is associated with increased vascular permeability [4]. This is evident from a rise in the haematocrit, reduced pulse pressure, pleural effusions, ascites and shock [5]. The critical phase typically lasts for 24-48 h, after which most patients proceed to recovery. Many inflammatory mediators have been implicated in the vascular leak seen in acute dengue [6-8]. In an earlier study, we found that levels of platelet activating factor (PAF) were significantly higher in patients with dengue haemorrhagic fever (DHF) when compared to those with dengue fever (DF), especially during the critical phase [9]. In endothelial cells, sera obtained from patients with acute dengue caused a reduction of the trans-endothelial resistance and tight junction protein expression, both of which were significantly inhibited by a PAF receptor blocker [9]. Since these experiments confirmed that PAF was likely to be an important mediator of vascular leak, we sought to identify mechanisms known to regulate PAF, in order to find new therapeutic targets in the treatment of acute dengue. PAF is an inflammatory lipid mediator that is produced by mast cells, monocytes, macrophages, neutrophils, endothelial cells and platelets [10,11]. Phospholipase A2 and acetyltransferase are required for its synthesis [12].
Phospholipase A2s constitute a group of inflammatory lipid enzymes that act on cellular phospholipids to generate free fatty acids (e.g. PAF) and lysophospholipids, and are known to enhance the systemic inflammatory response [12,13]. Both secretory phospholipase A2s (sPLA2) and cytoplasmic phospholipase A2 (cPLA2) are known to generate and regulate PAF; both are produced by mast cells, endothelial cells, hepatocytes, monocytes and many types of epithelial cells [14], and are degraded following binding with specific binding proteins [12,13,15]. Endothelial cells are also known to produce PAF, and it has been shown that mast cell products such as vascular endothelial growth factor (VEGF) induce production of PAF through the action of sPLA2s [16]. Since VEGF has been found to be elevated especially in patients with DHF and to associate with vascular leak [17,18], VEGF could be inducing PAF through activation of sPLA2s. The activity of both cPLA2 and sPLA2 is also induced by inflammatory cytokines such as TNFα, IL-1β and IL-6 [13,19]. Since these cytokines are known to be highly elevated, especially in patients with DHF, and follow the same patterns of production as PAF [20], they could also be inducing the activity of these lipid enzymes and subsequent PAF production. Mast cell activation occurs in acute dengue, and mast cell mediators have been associated with vascular leak in mouse models of this disease [8,18,21]. Mast cell products such as VEGF, mast cell tryptase and chymase have been shown to be significantly higher in patients with more severe forms of dengue, and VEGF could be acting through induction of sPLA2s to stimulate production of PAF [17]. In vitro studies have shown that mast cells are directly infected by the dengue virus (DENV) [22]. Antibody-dependent enhancement (ADE) has been shown to significantly increase the release of factors that lead to endothelial activation in in vitro studies [23] and in dengue mouse models [8]. DENV infection of mast cells is potentiated by ADE and is facilitated by autophagy [24]. In addition, pre-existing DENV-specific antibodies contribute to disease severity by inducing mast cell degranulation through Fcγ receptors [8]. Many studies have shown that mast cell mediator production is increased in patients with more severe forms of disease and that DENV-specific antibodies contribute to disease pathogenesis by ADE and by inducing mast cell degranulation. By contrast, some studies have shown that mast cells might be protective in acute dengue. For instance, a recent study showed that DENV infection in mice deficient in mast cells resulted in higher viral loads, prolonged viraemia and increased bleeding [25]. These disparate findings could be reconciled if mast cell activation is important for controlling DENV replication but also contributes to immune-mediated pathology. Since mast cells are an important source of PAF, and mast cell activation and degranulation have been shown to contribute to vascular leak and disease severity, we investigated the relationship between PAF and mast cell tryptase during the course of illness in patients with DHF and DF, and also in those with primary and secondary dengue. In addition, since PLA2 is known to regulate PAF and since mast cells are also a known source of certain subgroups of this lipid enzyme, we also evaluated the kinetics of sPLA2 activity in these patients.
We found that during the first five days (120 h) of clinical illness, serum sPLA2 activity was significantly higher in patients with DHF when compared to those with DF. Although mast cell tryptase levels were higher in patients with DHF, this increase was not statistically significant. In addition, both sPLA2 activity and mast cell tryptase levels were significantly associated with the degree of viraemia in patients with DHF. Therefore, events that occur early in DENV infection are likely to contribute to subsequent disease severity, and this suggests that they could be a focus of therapeutic targeting.

Methods

We enrolled 43 adult patients with clinical features of dengue, admitted to a general medical ward in a tertiary care hospital (Colombo South Teaching Hospital) in Colombo during the year 2015. Following informed written consent, serial blood samples were obtained in the morning (5.00 a.m.) and afternoon (3.00 p.m.), from the time of admission to the time of discharge from hospital, throughout the course of their illness. Only those whose duration of illness from onset was five days or less were recruited. The day on which the patient first developed fever was considered day one of illness. Those known to have chronic liver disease or chronic renal disease, pregnant individuals, and those on a steroid dose of >40 mg/day for over seven days were excluded from the study. Clinical features including fever, blood pressure, pulse pressure and urine output were measured at least four times a day. The severity of acute dengue was classified according to the 2011 WHO dengue guidelines [3]. Accordingly, patients who had a rise in haematocrit above 20% of the baseline haematocrit, or detectable fluid in the pleural or abdominal cavities by ultrasound scanning, were classified as having DHF. Shock was defined as cold clammy skin along with a narrowing of the pulse pressure to 20 mmHg or less. Based on the 2011 WHO guidelines, 30 patients had DF and 13 had DHF. As patients with DHF were in hospital longer than those with DF, approximately five to nine serial blood samples were collected from this group, compared to four to five samples from those with DF.

Ethics statement

Ethical approval was granted by the Ethical Review Committee of the University of Sri Jayawardanapura.

Quantitative PAF and mast cell tryptase assays

Levels of PAF and mast cell tryptase were determined in all serial blood samples, which were obtained twice a day. All assays were done in duplicate. The PAF (Cusabio, Wuhan, China) and mast cell tryptase (Abbexa, Cambridge, UK) assays were carried out using quantitative ELISA according to the manufacturers' instructions. These assays have been used previously to quantify levels of serum PAF [9] and mast cell tryptase.

Assays for sPLA2 activity in serum

The activity of sPLA2 in patients with dengue infection was measured using a commercial sPLA2 kit (Abcam, Cambridge, UK), according to the manufacturer's instructions. Briefly, in a flat-bottom 96-well plate, 10 µl of DTNB and 15 µl of assay buffer were added to the blank wells, and 10 µl of DTNB, 10 µl of bee venom PLA2 and 5 µl of assay buffer were added to the positive control wells. Ten microlitres of DTNB and 5 µl of assay buffer, along with 10 µl of the serum sample, were added to each sample well. The reaction was initiated by adding 200 µl of substrate solution to each well. All samples were tested in duplicate.
The absorbance was read every minute at 414 (or 405) nm using an ELISA plate reader to obtain at least five time points. The reaction rate was determined using a DTNB extinction coefficient of 10.66 mM−1, according to the recommended formula for calculating sPLA2 activity.

Determining viral loads in serial blood samples

RNA was extracted from all serial serum samples using the QIAamp Viral RNA Mini Kit (Qiagen, Valencia, CA) according to the manufacturer's protocol. The RNA was reverse transcribed into cDNA in a GeneAmp PCR System 9700 using the High Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Foster City, CA) according to the manufacturer's instructions. Reaction conditions were 10 min at 25°C, 120 min at 37°C, 5 min at 85°C and a final hold at 4°C. Multiplex quantitative real-time PCR was performed as previously described, using the CDC real-time PCR assay for detection of the DENV [26]. This assay was modified to quantify the DENV in addition to qualitative detection. Oligonucleotide primers and dual-labelled probes for DENV serotypes 1, 2, 3 and 4 were used (Life Technologies, Bengaluru, India) based on published sequences [26]. Each probe was labelled with a reporter dye and the QSY quencher; the serotype-specific probes were labelled as follows: DENV-1 with JUN, DENV-2 with ABY, DENV-3 with FAM and DENV-4 with VIC. The reactions consisted of 20 µl volumes containing the following reagents: TaqMan multiplex master mix (containing Mustang Purple reference dye), 900 nM of each primer, 250 nM of each probe, 2 µl of cDNA and nuclease-free water (Applied Biosystems, USA). The reaction was performed in an Applied Biosystems 7500 96-well plate detection system. Following initial denaturation for 20 s at 95°C, the reaction was carried out for 40 cycles of 3 s at 95°C and 30 s at 60°C. The threshold cycle value (Ct) for each reaction was determined by manually setting the threshold limit. Virus quantification (pfu/mL) of unknown samples was performed using the standard curve.

Generating standard curves for quantifying the DENV

To generate standard curves, the four DENVs were grown in C6/36 cell lines supplemented with L15 media at 28°C. Virus supernatants were harvested seven days following infection and immediately used to infect Vero 81 cell lines. To determine the infective virus particles by plaque assay, virus culture supernatants were serially diluted and inoculated in triplicate. Undiluted sample from the virus culture supernatant was used as the positive control and culture media as the negative control. After 5 days of incubation at 37°C in a 5% CO2 incubator, the plaques were developed and counted, and virus concentrations were calculated as pfu/mL. Following quantification of the viruses, standard curves were generated as serial dilutions from 10⁶ to 10¹ pfu/mL for each virus serotype. To quantify the virus in clinical samples, the unknowns were compared to known values on the standard curve of each virus. Viral loads were expressed as pfu/mL.

Detection of dengue NS1 antigen and dengue-specific antibodies

Acute dengue infection was confirmed in the serum samples using the NS1 early dengue ELISA (Panbio, Brisbane, Australia). All assays were done in duplicate. The dengue antibody response was also confirmed in these patients with a commercial capture-IgM and IgG enzyme-linked immunosorbent assay (ELISA) (Panbio). The ELISA was performed and the results were interpreted according to the manufacturer's instructions.
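To make the kinetic read-out described above concrete, the following is a minimal Python sketch of the rate calculation: the slope of A414 versus time is converted with the quoted DTNB extinction coefficient (10.66 mM−1) and scaled from the 225 µl reaction volume to the 10 µl of serum per well. The scaling step, the helper name and the toy readings are illustrative assumptions; the authoritative formula is the one in the kit insert.

```python
# Sketch of the sPLA2 activity calculation (illustrative, not the vendor's
# validated implementation). Assumes the quoted extinction coefficient
# already accounts for the well path length, as in common commercial kits.
import numpy as np
from scipy.stats import linregress

EPS_DTNB_mM = 10.66              # DTNB extinction coefficient, mM^-1 (kit value)
V_TOTAL_UL = 10 + 5 + 10 + 200   # DTNB + buffer + serum + substrate (ul/well)
V_SAMPLE_UL = 10                 # serum volume per well (ul)

def spla2_activity(times_min, a414, blank_slope=0.0):
    """Return sPLA2 activity in umol/min/ml from >=5 kinetic reads."""
    slope = linregress(times_min, a414).slope - blank_slope  # dA414/min
    rate_mM_per_min = slope / EPS_DTNB_mM    # substrate hydrolysed, mM/min
    # normalize from the reaction volume back to the serum volume
    return rate_mM_per_min * (V_TOTAL_UL / V_SAMPLE_UL)     # umol/min/ml

t = np.arange(5)                                  # five 1-min reads
a = np.array([0.10, 0.14, 0.18, 0.23, 0.27])      # example A414 values
print(f"sPLA2 activity: {spla2_activity(t, a):.3f} umol/min/ml")
```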
This ELISA assay has been validated as both sensitive and specific for primary and secondary dengue virus infections [27,28].

Statistical analysis

Statistical analysis was performed using GraphPad Prism version 6. As the data were not normally distributed (as determined by the frequency distribution analysis of GraphPad Prism), non-parametric tests were used. Differences in the serial values of sPLA2, mast cell tryptase, PAF and viral loads between patients with DHF and DF were assessed using multiple unpaired t-tests, with corrections for multiple comparisons made using the Holm-Sidak method and the significance level (alpha) set at 0.05. The associations between PAF, sPLA2, mast cell tryptase and viral loads were assessed using Spearman correlation.

Results

Of the 43 patients with acute dengue, 30 were classified as having DF and 13 as having DHF based on the WHO 2011 dengue guidelines [3]. None of the patients developed shock or severe bleeding manifestations, and all of them eventually recovered. A total of 36/43 (83.7%) of the patients were infected with DENV-1, while 1/43 (2.3%) was infected with DENV-4. Six patients were negative by quantitative PCR although their NS1 test was positive and dengue-specific IgM and IgG were positive. The average duration of illness when the first blood sample was obtained was 93.7 (SD ± 18.9) h since the onset of illness.

Kinetics of sPLA2 in the course of acute dengue infection

As in our previous studies, we found that PAF was significantly increased in patients with DHF when compared to those with DF, with PAF levels highest during the critical phase of dengue [9]; we therefore proceeded to investigate the kinetics of sPLA2 and mast cell tryptase alongside the kinetics of PAF in acute dengue infection. Phospholipase A2s are a group of lipid enzymes which act on cellular phospholipids to generate free fatty acids (e.g. PAF) and lysophospholipids [12]. Since PAF was found to be a cause of vascular leak in acute dengue, we determined the kinetics of sPLA2 to see whether the changes in sPLA2 activity were similar to the changes in PAF. We found that sPLA2 activity was significantly increased in patients with DHF during the early phases of illness and then tended to decline around 132-144 h of illness (Fig. 1A). sPLA2 activity was significantly higher in those with DHF when compared to those with DF during the first 120 h of illness. Although sPLA2 levels remained higher in patients with DHF beyond 120 h of illness, this difference was not significant. The kinetics of PAF were similar to our previous observations, and the levels of PAF were significantly higher at 96 h (P = 0.005) and 144 h of illness (P = 0.01) in patients with DHF when compared to those with DF (Fig. 1B). Similar to what we observed previously [9], the PAF levels showed a biphasic pattern, with the levels being highest in the mornings and decreasing to almost undetectable levels by the afternoon. The levels of PAF gradually declined in patients with DF after 120 h of illness, whereas high levels of PAF were seen up to 168 h of illness in patients with DHF. sPLA2 activity significantly correlated with PAF levels in patients with acute dengue infection (Spearman's r = 0.25, P = 0.0006) (Fig. 1C). The activity of sPLA2 also significantly correlated with viral loads (Spearman's r = 0.19, P = 0.02) and inversely correlated with lymphocyte numbers (Spearman's r = −0.15, P = 0.04).
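As a rough illustration of the statistical workflow described in the Methods above (not the authors' actual analysis script), the sketch below runs an unpaired t-test at each time point, applies a hand-rolled Holm-Sidak step-down correction, and computes a Spearman correlation. The toy data and all variable names are invented for the example.

```python
# Illustrative sketch: per-time-point unpaired t-tests with Holm-Sidak
# correction (as done in GraphPad Prism), plus a Spearman correlation.
import numpy as np
from scipy.stats import ttest_ind, spearmanr

def holm_sidak(pvals, alpha=0.05):
    """Return Holm-Sidak step-down adjusted p-values and a reject mask."""
    p = np.asarray(pvals, float)
    m = len(p)
    order = np.argsort(p)
    adj = np.empty(m)
    for rank, idx in enumerate(order):
        # rank is 0-based, so the Sidak exponent is m - rank
        adj[idx] = 1.0 - (1.0 - p[idx]) ** (m - rank)
    running = 0.0
    for idx in order:                 # enforce step-down monotonicity
        running = max(running, adj[idx])
        adj[idx] = min(running, 1.0)
    return adj, adj < alpha

rng = np.random.default_rng(0)
timepoints = [96, 108, 120, 132, 144]          # hours of illness (example)
raw_p = []
for _ in timepoints:
    dhf = rng.normal(1.2, 0.3, 13)             # 13 DHF patients (toy data)
    df_ = rng.normal(1.0, 0.3, 30)             # 30 DF patients (toy data)
    raw_p.append(ttest_ind(dhf, df_).pvalue)
adj_p, significant = holm_sidak(raw_p)
print(dict(zip(timepoints, np.round(adj_p, 4))))

spla2 = rng.normal(1.0, 0.2, 43)
paf = 0.5 * spla2 + rng.normal(0, 0.2, 43)
rho, p = spearmanr(spla2, paf)                 # cf. r = 0.25, P = 0.0006 above
print(f"Spearman's r = {rho:.2f}, P = {p:.4f}")
```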
On further analysis, the association of viral loads with serum sPLA2 activity was significant in patients with DHF (Spearman's r = 0.43, P = 0.001) but not in those with DF (Spearman's r = −0.22, P = 0.14). We did not observe any relationship between viral loads and PAF in either patients with DF or those with DHF. Serum sPLA2 activity was also associated with liver transaminases, as it significantly correlated with serum aspartate transaminase levels (Spearman's r = 0.23, P = 0.005) and serum alanine transaminase levels (Spearman's r = 0.23, P = 0.03).

Kinetics of mast cell tryptase in acute dengue

Mast cells have recently been shown to be important in the pathogenesis of dengue, and certain mast cell mediators have been associated with vascular leak [22,29]. Amongst other cells, mast cells are an important source of PAF, and some groups of sPLA2s have been shown to be important in the maturation of mast cells [30,31]. Since we found that both sPLA2 activity and PAF levels were higher in DHF, we investigated whether this increase was associated with activation of mast cells. As tryptase is highly specific for mast cells [11], we proceeded to determine whether mast cell degranulation, as indicated by release of tryptase, was associated with high levels of PAF and increased sPLA2 activity. During the first 108 h of illness, mast cell tryptase levels were increased in patients with DHF when compared to those with DF, although this was not significant at any time point (Fig. 2A). The normal range for mast cell tryptase in healthy individuals is considered to be <11.5 ng/mL (11,500 pg/mL) [32]. None of the patients with DHF had mast cell tryptase levels higher than 11.5 ng/mL at any time point during illness. In addition, mast cell tryptase did not correlate with sPLA2 activity (Spearman's r = 0.06, P = 0.4) or with levels of PAF (Spearman's r = −0.01, P = 0.87). However, mast cell tryptase was associated with the degree of viraemia (Spearman's r = 0.31, P = 0.0002). This association was significant in those with DHF (Spearman's r = 0.55, P < 0.0001) (Fig. 2B), whereas no such association was seen in those with DF (Spearman's r = 0.05, P = 0.62). Mast cell tryptase also did not associate with any of the laboratory disease severity markers such as liver enzymes, lymphocyte counts or platelet counts.

PAF, sPLA2 and mast cell tryptase in those with primary and secondary dengue infection

DHF is more common in secondary dengue infection, and this is thought to be due to enhancement of infection in Fc receptor-bearing cells, caused by DENV-specific, poorly neutralising, highly cross-reactive antibodies [33,34]. Pre-existing DENV-specific antibodies have also been shown to increase mast cell degranulation both in vitro and in dengue mouse models [8]. Therefore, we determined the changes in the kinetics of mast cell tryptase, sPLA2 activity and PAF levels in patients with primary and secondary dengue. Fourteen patients had primary dengue and 29 had secondary dengue. Of the 13 patients with DHF, only four had primary dengue. Therefore, of the 14 patients with primary dengue, four developed DHF and 10 developed DF, while of the 29 patients with secondary dengue, nine developed DHF and 20 developed DF. We did not observe significant differences in the kinetics of mast cell tryptase, sPLA2 or PAF between those with primary and secondary dengue (Fig. 3).
Discussion

In this study, we investigated the changes in sPLA2 activity and the levels of PAF and mast cell tryptase throughout the course of illness in patients with varying severity of dengue infection. We found that serum sPLA2 activity was markedly increased and significantly higher in patients with DHF during the first 120 h from the onset of illness. Levels of PAF were also significantly higher in patients with DHF at 96 and 144 h since the onset of illness. Although mast cell tryptase levels were also higher in patients with DHF, this was not significant at any time point since the onset of illness. Phospholipase A2s are a group of lipid enzymes which act on cellular phospholipids to generate free fatty acids and lysophospholipids, and which also generate PAF [12]. Over one third of the phospholipases are of the secretory form (sPLA2), which has 10 isoforms [14]. Among these, group IIA is the main isoform detected in serum, and its activity has been shown to be induced by many proinflammatory stimuli including bacterial products [14,35]. Due to its high affinity for bacterial membrane lipids, it effectively degrades many bacteria. However, many inflammatory lipid mediators and arachidonic acid metabolites are also generated through the activity of sPLA2, and the group IIA sPLA2s are therefore also known as inflammatory phospholipase enzymes with wide-ranging effects [13,14,36,37]. We found that in patients with acute dengue, sPLA2 activity was highest very early in the course of infection, and the levels were significantly higher in those with DHF. The activity of sPLA2 diminished towards 120-132 h of illness, and from this time onwards there was no difference between the activity of sPLA2 in patients with DHF and those with DF. Interestingly, the degree of viraemia significantly correlated with sPLA2 activity in those with DHF but not in those with DF. Since the viraemia in acute dengue infection also tends to decline with time, becoming very low or undetectable at around 120-132 h, it is possible that the virus itself or certain viral proteins such as NS1 could be inducing the activity of sPLA2. Recently it was shown that dengue NS1 has an LPS-like activity and acts through TLR4 receptors to induce production of inflammatory cytokines from monocytes and macrophages [38,39]. NS1 was also shown to induce vascular leak both in vitro and in dengue mouse models [38,40]. The mechanisms by which dengue NS1 induces vascular leak are not well studied [40] but may involve complement-mediated effects. NS1 could possibly induce sPLA2, thereby regulating PAF, which in turn induces vascular leak; alternatively, PAF may influence production of sPLA2 [35]. In our previous studies [9], significantly higher PAF levels were detected in patients with DHF, and the dengue NS1 antigen has also been shown to persist for a longer duration at higher levels [39,41]. Therefore, whether NS1 plays a role in inducing sPLA2 or PAF should be further investigated. Activation of mast cells, directly by the DENV and by DENV-specific antibodies, has been shown to occur both in vitro and in dengue mouse models [8,21,23,24]. Mast cell products such as VEGF and chymase have been shown to be significantly elevated in patients with more severe forms of dengue infection [17,21], and leukotriene receptor antagonists were found to significantly reduce the vascular leak in dengue mouse models [21].
Given the accumulating evidence of the importance of mast cells in the pathogenesis of acute dengue infection, we investigated the changes in another mast cell-specific protease, mast cell tryptase, in patients with acute dengue. Although the levels of mast cell tryptase were higher in patients with DHF, they were not significantly higher than in patients with DF at any time point. In addition, although mast cell tryptase levels were higher, they did not exceed the normal ranges observed in healthy individuals at any time point during acute dengue infection. As with the relationship between sPLA2 activity and the degree of viraemia, we found a significant correlation between the degree of viraemia and mast cell tryptase levels in patients with DHF, but not in those with DF. Although St John et al. found that serum chymase levels inversely correlated with viral loads in patients with DHF [21], this difference is possibly due to the fact that they studied chymase levels and viral loads at one time point during illness, whereas our observations concern the correlation between viral loads and tryptase throughout the course of illness. DENV-specific antibodies have been shown to induce activation of mast cells by facilitating autophagy and also by inducing mast cell degranulation through Fcγ receptors [8,24]. Therefore, activation and degranulation of mast cells by pre-existing DENV-specific antibodies is thought to be another mechanism by which DENV-specific antibodies contribute to severe clinical disease in secondary dengue infection [8]. However, since none of the mast cell proteases such as tryptase or chymase themselves induce increased vascular permeability in endothelial cells, we determined the changes in the levels of mast cell tryptase and PAF and in serum sPLA2 activity in patients with primary and secondary dengue. We did not find any difference in the levels of PAF or mast cell tryptase, or in sPLA2 activity, between those with primary and secondary dengue during the course of illness. Although the risk of DHF is known to be higher in those with secondary dengue infection [42-44], many individuals with primary dengue infection have also developed DHF [45-47], which can be associated with fatalities. In fact, some studies have shown that the likelihood of developing clinically inapparent dengue infection is similar during primary and secondary dengue infections [48]. Therefore, the presence of pre-existing DENV-specific antibodies alone does not appear to fully explain severe clinical disease in secondary infection, and other modifying factors are important. Since we found that PAF is an important mediator of vascular leak, and since PLA2 activity induces the production of PAF and other inflammatory mediators such as arachidonic acid metabolites including leukotrienes, it is crucial to understand the pathways that lead to increased sPLA2 activity. In summary, we found that serum sPLA2 activity was significantly increased in patients with DHF and was associated with other disease severity markers such as liver transaminases. As previously observed, levels of PAF were significantly higher in patients with DHF, and serum sPLA2 activity correlated with PAF. However, since mast cell tryptase levels were not significantly higher in patients with DHF or in those with secondary dengue, non-mast cell sources of PAF may also be important.
Since sPLA2 activity was highest during the early phase of clinical disease and was significantly associated with the degree of viraemia, it will be important to understand the triggers that increase sPLA2 activity in order to develop new therapeutic targets.
Active-site solvent replenishment observed during human carbonic anhydrase II catalysis

Ultrahigh-resolution crystallographic structures of human carbonic anhydrase II (hCA II) cryocooled under CO2 pressures of 7.0 and 2.5 atm are presented. The structures reveal new intermediate solvent states of hCA II that provide crystallographic snapshots during restoration of the proton-transfer water network in the active site.

1. Introduction

The reversible interconversion of carbon dioxide (CO2) and water to bicarbonate and a proton (H+) occurs at a rate that is limited by the diffusion of substrates in the presence of carbonic anhydrases (CAs) as the catalyst (Davenport, 1984; Christianson & Fierke, 1996; Chegwidden & Carter, 2000; Frost & McKenna, 2013; Supuran & De Simone, 2015). CAs are metalloenzymes that mostly contain zinc, although some are found with iron or cadmium. There are six distinct families of CA (α, β, γ, δ, ζ and η, the last of which was recently subdivided from the α family) that are found throughout the animal, plant and bacterial kingdoms. In animals, CAs primarily function to maintain acid-base balance in the blood and other tissues, and to help transport CO2 out of tissues. In particular, mammalian CAs belong to the α family and are expressed as many different isozymes (Hewett-Emmett & Tashian, 1996). For instance, the 14 forms of human α-CA can be divided into four cytosolic (I, II, III and VII), two mitochondrial (VA and VB), one secreted (VI) and four membrane-bound (IV, IX, XII and XIV) isoforms. The remaining three isoforms lack catalytic activity and are referred to as carbonic anhydrase-related proteins (CARPs). Among these isozymes, human CA II (hCA II) is expressed in most cell types, with involvement in many physiological processes (Krishnamurthy et al., 2008; Frost & McKenna, 2013; Supuran & De Simone, 2015). The first crystal structure of hCA II, known at the time as hCA C, was determined by Liljas and coworkers in 1972 and was further refined in 1988 (Liljas et al., 1972; Eriksson et al., 1988). These studies laid the foundation for understanding the mechanism of CA activity. In hCA II, the active-site zinc is located within an ~15 Å deep cleft and is tetrahedrally coordinated by three histidine residues (His94, His96 and His119) and an OH− ion (Fig. 1). Furthermore, the active-site cavity subdivides into two distinct sides, formed by hydrophilic residues (e.g. Tyr7, Asn62, His64, Asn67, Thr199 and Thr200) and hydrophobic residues (e.g. Val121, Val143, Leu198, Thr199-CH3, Val207 and Trp209). The 'hydrophobic' side sequesters and positions the CO2 for nucleophilic attack by OH− (Liang & Lipscomb, 1990). Mechanistically, the conversion of CO2 to bicarbonate in the hydration direction takes place via the nucleophilic attack of CO2 by the zinc-bound hydroxide (OH−), and the subsequently generated bicarbonate is then displaced by a water molecule (WZn; Silverman & Lindskog, 1988) (1):

EZnOH− + CO2 ⇌ EZnHCO3− ⇌ (+H2O) EZnH2O + HCO3−   (1)

The next step of catalysis is the transfer of a proton from WZn to the bulk solvent, regenerating the zinc-bound OH− (2):

EZnH2O + B ⇌ EZnOH− + BH+   (2)

Here, the tetrahedral coordination of WZn to zinc causes polarization of the hydrogen-oxygen bond, making the O atom slightly more positive and thereby weakening the bond (Christianson & Fierke, 1996). Proton transfer to the general base (B) is likely to be mediated by ordered waters and His64 within the enzyme, where the hydrophilic side of the active site forms the hydrogen-bonded water network (W1, W2, W2′, W3a and W3b) that connects WZn to His64.
This hydrogen-bonded network is believed to act as a proton wire that reduces the work required to transfer a proton from WZn to the bulk solvent for the regeneration of the zinc-bound OH− (2) (Steiner et al., 1975; Cui & Karplus, 2003; Zheng et al., 2008; Fisher, Maupin et al., 2007; Silverman et al., 1979). Neutron studies have been utilized to observe the protonation states and orientation of water molecules in proteins (Langan et al., 2008). Such experiments have determined that the water network in hCA II is pH-dependent, with an unbranched wire between WZn and His64 at physiological pH that is broken at high pH owing to a rearrangement of the hydrogen bonds of W1 (Budayova-Spano et al., 2006; Fisher et al., 2011). The side chain of His64 is oriented in two conformations, termed the 'in' (pointing towards the active site) and 'out' (pointing away from the active site) positions, that are suggested to facilitate the proton-shuttling process (Tu et al., 1989; Fisher et al., 2005; Nair & Christianson, 1991; Lindskog, 1997; Avvaru et al., 2010). Neutron structures have revealed that His64 is uncharged when occupying the 'in' position, priming the residue for the acceptance of a proton transferred from WZn and regenerating the enzyme during catalysis (Fisher et al., 2010). Moreover, the fact that the binding of small molecules (activators) in the vicinity of His64 changes the catalytic rate by altering the proton-transfer step adds to the hypothesis that proton shuttling occurs via His64 (Supuran, 2008; Temperini et al., 2005, 2006a; Briganti et al., 1997, 1998).

Figure 1. Surface rendition of hCA II depicted using the ultrahigh-resolution (0.9 Å) crystal structure of 7.0 atm CO2 hCA II, with the hydrophobic and hydrophilic regions coloured green and yellow, respectively. (a) Overall view showing the CO2-binding active site, located at a depth of ~15 Å from the surface and open to the outside bulk solvent through an entrance of ~7-10 Å diameter (referred to as the 'entrance conduit' in this study). The substrate and product (CO2/bicarbonate) of hCA II, as well as water molecules, can pass in and out through this open entrance conduit. (b) A closer view of the active site (Zn, CO2-binding site and His64) seen through the entrance conduit, with a 2Fo−Fc map (blue) contoured at 2.5σ (for His64) and 5.0σ (elsewhere). The isolated electron density of the C atom of CO2 is clearly visible in this ultrahigh-resolution structure. The protein surface around His64 is removed and the proton-transfer water network (WI/W2/W3a/W3b) is not shown for clarity. Proton transfer during catalytic activity is thought to occur via His64 through the proton wire rather than through the open entrance conduit.

Previously, the capture of CO2 in the active site of hCA II was achieved by cryocooling hCA II crystals under a 15 atm (1 atm = 101.325 kPa) CO2 pressure (Kim et al., 2005; Domsic et al., 2008). More recently, attempts have been made to track the intermediate changes during gradual CO2 release to the CO2-free state by incubating 15 atm CO2-pressurized hCA II crystals at room temperature (RT) for different time intervals (50 s, 3 min, 10 min, 25 min and 1 h) to decrease the internal CO2 pressure (Kim et al., 2016). The resulting so-called intermediate snapshots revealed that two deep waters (WDW and W′DW) immediately replace the vacated space as CO2 leaves the active site.
In addition, WI (an intermediate water that is only observed in fully CO2-bound hCA II; Domsic et al., 2008) abruptly disappears, while W1 appears, as the CO2 is released. Moreover, with CO2 release, W2′ (an alternate position of W2) in close proximity to residue His64 was observed to gradually disappear, while His64 concurrently rotated from the 'out' to the 'in' rotameric conformation. Despite the structural changes observed, the rapid changes taking place with the crystal incubation method left some key questions unanswered, such as how the proton-transfer water network is restored during hCA II catalytic activity. In this study, we present ultrahigh-resolution structures of hCA II from crystals cryocooled under CO2 pressures of 7.0 atm (0.9 Å resolution) and 2.5 atm (1.0 Å resolution), hereafter referred to as '7.0 atm CO2 hCA II' and '2.5 atm CO2 hCA II', respectively. These two structures were compared with three previous structures (Kim et al., 2016) of hCA II crystals cryocooled under 15 atm CO2 pressure and then incubated at room temperature for 0 s (1.2 Å resolution structure from PDB entry 5dsi; hereafter '15 atm CO2 hCA II'), 50 s (1.25 Å resolution structure from PDB entry 5dsj; hereafter '15 atm CO2 hCA II-50s') and 1 h (1.45 Å resolution structure from PDB entry 5dsn; the 'CO2-free state', hereafter '15 atm CO2 hCA II-1h'). The structural comparison reveals that 7.0 atm CO2 hCA II and 2.5 atm CO2 hCA II are previously unknown intermediate states between 15 atm CO2 hCA II and 15 atm CO2 hCA II-50s. Together, these studies provide a view of how hCA II utilizes a water reservoir to fill the void in the active site as CO2 is released.

2.1. Protein expression and purification

The zinc-containing hCA II was expressed in a recombinant strain of Escherichia coli BL21 (DE3) pLysS transformed with a plasmid encoding the hCA II gene (Forsman et al., 1988). Purification was carried out using affinity chromatography as described previously (Khalifah et al., 1977). Briefly, bacterial cells were enzymatically lysed with hen egg-white lysozyme and the lysate was loaded onto agarose resin coupled with p-(aminomethyl)benzenesulfonamide, which binds to hCA II. The protein on the resin was eluted with 400 mM sodium azide in 100 mM Tris-HCl pH 7.0. The azide was removed by extensive buffer exchange against 10 mM Tris-HCl pH 8.0.

2.2. Protein crystallization

Crystals of hCA II were obtained using hanging-drop vapour diffusion (McPherson, 1982). A 10 µl drop consisting of equal volumes of protein solution (5 µl) and well solution (5 µl) was equilibrated against 1 ml of well solution (1.3 M sodium citrate, 100 mM Tris-HCl pH 7.8) at room temperature (~20°C) (Domsic et al., 2008). Crystals grew to approximate dimensions of 0.1 × 0.1 × 0.3 mm in a few days.

2.3. CO2 entrapment using pressure cryocooling

CO2 entrapment was carried out as described in previous reports (Domsic et al., 2008; Kim et al., 2016). The hCA II crystals were first soaked in a cryosolution consisting of the reservoir solution supplemented with 20%(v/v) glycerol. The crystals were then coated with mineral oil to prevent dehydration and loaded into the base of high-pressure tubes. Once in the pressure tubes, the crystals were pressurized with CO2 gas to two different pressures (7.0 and 2.5 atm) at room temperature. After 10 min, the crystals were cryocooled to liquid-nitrogen temperature (77 K) without releasing the CO2 gas.
Once the CO2-bound crystals had been fully cryocooled, the crystal-pressurizing CO2 gas was released and the crystal samples were stored in a liquid-nitrogen dewar until X-ray data collection. Note that once cryocooled, the CO2-bound hCA II crystals were handled at ambient pressure just like normal flash-cryocooled protein crystals.

2.4. X-ray diffraction and data collection

Diffraction data were collected on CHESS beamline F1 (wavelength of 0.9180 Å, beam size of 100 µm) under a nitrogen cold stream (100 K). Data were collected using the oscillation method in intervals of 1° on an ADSC Quantum 270 CCD detector (Area Detector Systems Corporation) with a crystal-to-detector distance of 100 mm. For the 7.0 atm CO2 hCA II data set (0.9 Å resolution), an initial data set consisting of 180 images was collected with 1 s exposures to cover diffraction to a resolution of 1.1 Å. The detector was then offset to cover diffraction to a resolution of 0.88 Å, and a second data set consisting of 360 images was collected with 10 s exposures. For the 2.5 atm CO2 hCA II data set (1.0 Å resolution), a single data set consisting of 360 images was collected with a 10 s exposure for each image. For each X-ray data set, the estimated absorbed X-ray dose was ~2 × 10⁷ Gy; no significant decay of diffraction resolution was observed up to this dose. Indexing, integration, merging and scaling were performed using HKL-2000 (Otwinowski & Minor, 1997). Data-processing statistics are given in Table 1.

2.5. Structure determination and model refinement

The structures of hCA II at CO2 pressures of 7.0 and 2.5 atm were determined using the CCP4 program suite. Prior to refinement, a random 5% of the data were flagged for Rfree analysis. The previously determined 1.1 Å resolution crystal structure (PDB entry 3d92; Domsic et al., 2008) was used as the initial phasing model. Maximum-likelihood refinement (MLH) was carried out using REFMAC5 (Murshudov et al., 2011), and water molecules were automatically picked using ARP/wARP (Perrakis et al., 1999) during the MLH cycles. The refined structures were manually checked using the molecular graphics program Coot (Emsley & Cowtan, 2004). Reiterations of MLH refinement were carried out with anisotropic B factors and riding H atoms. The partial occupancies of W1 in 7.0 and 2.5 atm CO2 hCA II were estimated such that the residual electron density in the Fo−Fc map disappeared. The refinement statistics are given in Table 1. We also re-refined the water molecules in the three previously reported structures (PDB entry 5dsi for 15 atm CO2 hCA II, PDB entry 5dsj for 15 atm CO2 hCA II-50s and PDB entry 5dsn for 15 atm CO2 hCA II-1h; Kim et al., 2016) for accurate comparison of the bound water molecules in the active site and the entrance conduit.

Figure 2. The active site of hCA II at different internal CO2 pressures: (a) 15 atm CO2 hCA II, (b) 7.0 atm CO2 hCA II, (c) 2.5 atm CO2 hCA II, (d) 15 atm CO2 hCA II-50s, (e) 15 atm CO2 hCA II-1h. The 2Fo−Fc electron-density map (blue) is contoured at 1.3σ, except for W′I in (d), which is contoured at 1.0σ. The intermediate waters WI and W′I are coloured light grey and the entrance-conduit water W′EC1 cyan for clarity. CO2 is fully bound in the active site in (a) and (b); concurrent with the decrease in CO2 pressure, the electron density for CO2 fades out in (c) and is eventually replaced by two water molecules in the CO2 binding site (d, e).
Note also the dynamic changes reflected by the electron densities of WI, W′I and W1 that take place as the internal CO2 pressure decreases; these events are explained more explicitly in Fig. 4.

The re-refined structures were updated in the PDB with the new PDB codes 5yui (superseding 5dsi), 5yuj (superseding 5dsj) and 5yuk (superseding 5dsn). Details of the structural analysis of the bound water molecules are given in the Supporting Information. All structural figures were rendered with PyMOL (Schrödinger).

3. Results and discussion

3.1. CO2 binding sites: active site (CO2/WZn/WDW/W′DW) and secondary CO2 site near Phe226

Structural examination shows that the five structures are very similar. The all-protein-atom r.m.s.d.s between 15 atm CO2 hCA II and the other four structures (7.0 atm CO2 hCA II, 2.5 atm CO2 hCA II, 15 atm CO2 hCA II-50s and 15 atm CO2 hCA II-1h) are 0.14, 0.12, 0.10 and 0.13 Å, respectively. Although changes in the zinc-coordinating histidines (His94, His96 and His119) are negligible between the structures, the electron densities for the CO2 binding site differ significantly, as expected (Fig. 2). While 15 and 7.0 atm CO2 hCA II show a clear position for the CO2 (Figs. 2a and 2b), deterioration of the electron density for the CO2 site occurs in the 2.5 atm CO2 hCA II structure, represented by sparsely connected lobes (Fig. 2c). When this density is modelled and refined with only CO2, the CO2 occupancy is at most 0.7. This difference suggests that the CO2 site is occupied by both CO2 and two waters (the deep waters WDW and W′DW) at a pressure of 2.5 atm. The manifestation of WDW at this pressure is supported by the extended electron-density connection from CO2 to WI (Supplementary Fig. S1). In 15 atm CO2 hCA II-50s, the electron density for the CO2 binding site is further shifted towards Zn and WZn, which correlates with the known positions of WDW and W′DW (Fig. 2d). This argues that 2.5 atm CO2 hCA II has a higher internal CO2 pressure than 15 atm CO2 hCA II-50s. Finally, in 15 atm CO2 hCA II-1h, the electron density of the CO2 binding site splits into two distinct lobes, indicating that the CO2 site is completely replaced by WDW and W′DW (Fig. 2e). Previously, the binding of a secondary CO2 molecule, 15-16 Å away from the active-site CO2 molecule, was reported in a hydrophobic pocket created by Val223 and Phe226 (Domsic et al., 2008). Comparison of the 15 atm CO2 hCA II and 15 atm CO2 hCA II-1h structures in this region indicates that the side chain of Phe226 must rotate to accommodate the secondary CO2 molecule (Supplementary Figs. S2a and S2e). Interestingly, in the lower-pressure 7.0 atm CO2 hCA II, the subdued electron density for the secondary CO2 results in dual conformations of the Phe226 side chain (Supplementary Fig. S2b).
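As a side note, the all-protein-atom r.m.s.d. values quoted at the start of this section are straightforward to reproduce once matching models are in hand. The Python sketch below assumes isomorphous structures with identical atom ordering (so no superposition step is needed) and uses placeholder file names; it is an illustration, not the authors' analysis pipeline.

```python
# Minimal all-protein-atom r.m.s.d. between isomorphous PDB models.
# Assumes matching ATOM records in identical order; file names are
# placeholders, not the deposited entries.
import numpy as np

def read_protein_coords(pdb_path):
    """Collect x, y, z of protein (ATOM) records from a PDB file."""
    xyz = []
    with open(pdb_path) as fh:
        for line in fh:
            if line.startswith("ATOM"):
                xyz.append([float(line[30:38]),   # standard PDB columns
                            float(line[38:46]),
                            float(line[46:54])])
    return np.array(xyz)

def rmsd(a, b):
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

ref = read_protein_coords("hcaii_15atm.pdb")       # placeholder name
for tag in ("7atm", "2p5atm", "50s", "1h"):
    mob = read_protein_coords(f"hcaii_{tag}.pdb")  # placeholder names
    print(tag, f"{rmsd(ref, mob):.2f} A")          # paper reports 0.10-0.14 A
```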
Figure 3. Rotameric states of His64 and solvent positions at different internal CO2 pressures: (a) 15 atm CO2 hCA II, (b) 7.0 atm CO2 hCA II, (c) 2.5 atm CO2 hCA II, (d) 15 atm CO2 hCA II-50s, (e) 15 atm CO2 hCA II-1h. The 2Fo−Fc electron density for W′I in (d) is contoured at 1.0σ; in all other cases, the density for His64 is contoured at 1.5σ and that for the waters at 1.3σ. The intermediate waters WI and W′I are coloured light grey and the entrance-conduit water W′EC1 cyan for clarity. As the internal CO2 pressure decreases, W2′ gradually dissipates and the His64 side chain shifts from the 'out' to the 'in' position from (a) to (e). The intermediate water WI is clearly observed in (a); its electron density gradually subsides (b, c) and finally disappears (d, e). In accordance with the decrease in WI, electron density for W1, which is not observable in (a), appears in (b) and subsequently increases gradually (c, d, e); when the models are refined with partial occupancy, the W1 occupancies are 0.8 in (b) and 0.9 in (c). Interestingly, the electron density for the newly observed intermediate water W′I increases gradually from (a) to (c), but decreases in (d) and disappears in (e). The measured distance between WI and W1 in (b) and (c) is 2.0 Å. The electron density for W3a is well isolated in all cases, but W3b shows an alternate position, W3b′, in (a) which grows in (b) but subsequently disappears (c, d, e).

In the cases of 2.5 atm CO2 hCA II and 15 atm CO2 hCA II-50s, the secondary CO2 was not present and the Phe226 side chain sits in the position observed in the CO2-free 15 atm CO2 hCA II-1h (Supplementary Figs. S2c, S2d and S2e). Hence, the observation of the secondary CO2 and the dual conformations of the Phe226 side chain in 7.0 atm CO2 hCA II imply that 7.0 atm CO2 hCA II has a higher internal CO2 pressure than 15 atm CO2 hCA II-50s.

3.2. His64 and the water network (W1/WI/W2/W2′/W3a/W3b) near the active site

As described above, structural examination of the CO2 binding sites suggests that both 7.0 atm CO2 hCA II and 2.5 atm CO2 hCA II have a higher internal CO2 pressure than 15 atm CO2 hCA II-50s. Furthermore, 7.0 atm CO2 hCA II intuitively has a higher internal CO2 pressure than 2.5 atm CO2 hCA II, leading to the conclusion that the internal CO2 pressure decreases in the sequence 15 atm CO2 hCA II, 7.0 atm CO2 hCA II, 2.5 atm CO2 hCA II, 15 atm CO2 hCA II-50s, 15 atm CO2 hCA II-1h. Such an interpretation establishes that 7.0 atm CO2 hCA II and 2.5 atm CO2 hCA II are intermediate states that fill the gaps between the 15 atm pressurized CO2 hCA II and the earliest time point of CO2 release (15 atm CO2 hCA II-50s) observed in the previous study (Kim et al., 2016). On this foundation, His64 and the water network near the active site were analyzed in order of decreasing internal CO2 pressure (Fig. 3). Although the side chain of His64 lies predominantly in the 'out' position in 15 atm CO2 hCA II (Fig. 3a), the electron density of His64 indicates that it occupies dual 'out' and 'in' positions as the internal CO2 pressure decreases (Figs. 3b, 3c and 3d). However, in the CO2-free 15 atm CO2 hCA II-1h, His64 is observed to primarily occupy the 'in' position (Fig. 3e). In concert with His64 moving from the 'out' to the 'in' position, the density for W2′ (an alternate position of W2) is observed to gradually dissipate. In previous studies, it was recognized that when CO2 is fully bound, as in the 15 atm CO2 hCA II structure, WI but not W1 is observed (as in Figs. 2a and 3a; Kim et al., 2016), whereas WI disappeared and W1 appeared instead in the CO2-free 15 atm CO2 hCA II-1h (as in Figs. 2e and 3e; Kim et al., 2016). Because the measured distance between WI and W2 is ~4.8 Å, the hydrogen-bonded water network (via W1, W2 and His64) necessary for the proton-transfer wire was presumed to be broken when CO2 fully binds to the active site.
In this study, we observed the dynamic replacement of WI with W1 as the internal CO2 pressure decreases, since dually occupied positions of W1 and WI are seen for 7.0 and 2.5 atm CO2 hCA II (Figs. 3b and 3c). The reduction of electron density for WI is observed in the order 15, 7.0 and 2.5 atm CO2 hCA II, with complete disappearance in 15 atm CO2 hCA II-50s and 15 atm CO2 hCA II-1h (Fig. 3). In contrast, W1 electron density starts to emerge in 7.0 atm CO2 hCA II, is more pronounced in 2.5 atm CO2 hCA II, and is fully occupied in 15 atm CO2 hCA II-50s and 15 atm CO2 hCA II-1h. The close 2.0 Å distance between W1 and WI in 7.0 and 2.5 atm CO2 hCA II suggests that W1 and WI exhibit partial occupancies rather than being two separate, stable waters. The inverse correlation, with a decrease in WI and an increase in W1 electron density upon decreasing internal CO2 pressure, suggests that WI moves to the W1 position upon CO2 release.

In addition, a newly observed intermediate water, W'I, was identified (Figs. 2, 3b and 3c). When the previously reported structures were compared, W'I existed in the 15 atm CO2 hCA II and 15 atm CO2 hCA II-50s structures, but was overlooked because of its faint electron density (Figs. 2, 3a and 3d). Compared with WI, W'I is located farther away from the active site and more towards the entrance that connects the active site of hCA II to bulk solvent. Because the entrance is near the active site, where water, substrate and product (CO2/bicarbonate) can interchange and interact with bulk solvent, we will refer to it as the 'entrance conduit' (Fig. 1). The conduit consists of the hydrophobic residues Leu198, Val135, Leu204, Pro202 and Phe131 on one side, and the hydrophilic residues His64, Gln92 and Thr200 on the other. It should also be noted that the proton-shuttling His64 is positioned perpendicularly to this entrance conduit (Fig. 1). A close inspection of the structures further identified five water molecules (named entrance-conduit waters or WECs) that are ordered along the surface of the entrance conduit in the CO2-free 15 atm CO2 hCA II-1h (sequentially named counterclockwise as WEC1, WEC2, WEC3, WEC4 and WEC5, starting from the one closest to water W3b; Fig. 4). Unlike for WEC3 and WEC4, alternate positions for WEC1 (W'EC1 and W''EC1), WEC2 (W'EC2) and WEC5 (W'EC5) exist in the different internal CO2 pressure structures. Tight hydrogen-bonding networks, mediated by residues lining the entrance conduit, stabilize WEC1, WEC2, WEC3, WEC4 and WEC5 (Supplementary Fig. S3).

Figure 4. Solvent positions in the entrance conduit. (a) 15 atm CO2 hCA II, (b) 7.0 atm CO2 hCA II, (c) 2.5 atm CO2 hCA II, (d) 15 atm CO2 hCA II-50s, (e) 15 atm CO2 hCA II-1h. In all cases the electron-density (2Fo − Fc) maps are contoured at 1.3σ, except for the electron density (2Fo − Fc) for W'I in (d), which is contoured at 1.0σ. The intermediate waters WI and W'I are coloured light grey and the entrance-conduit waters are coloured cyan for clarity. As the internal CO2 pressure decreases, alternate positions appear and disappear, especially for WEC1, WEC2 and WEC5, suggesting dynamical motions that are correlated with the dynamical changes in WI, W'I and W1 (a, b, c). For example, note that W'EC1 in (a) and (b) is located next to the electron density for W3b', attesting to the interaction between the two. As WI and W'I disappear in (d) and (e), together with the appearance of W1, the entrance-conduit water molecules return to the singly ordered positions (d, e).
For instance, the side-chain amide N atom of Gln92 binds to WEC2, and the main-chain carbonyl O atom of Pro201 and the side-chain hydroxyl O atom of Thr200 bind to WEC5; these interactions are conserved throughout all of the internal CO2 pressure structures. Hydrogen-bonding interactions also exist in all of the structures between the five WEC waters (WEC1-WEC2, WEC2-WEC3, WEC3-WEC4 and WEC4-WEC5).

Figure 5. Entrance-conduit water dynamics. (a) 15 atm CO2 hCA II, (b) 7.0 atm CO2 hCA II, (c) 2.5 atm CO2 hCA II, (d) 15 atm CO2 hCA II-50s, (e) 15 atm CO2 hCA II-1h. In all cases, the electron-density (2Fo − Fc) maps are contoured at 1.3σ. The intermediate water WI is coloured light grey and the entrance-conduit waters are coloured cyan for clarity. Although all five WEC waters (WEC1, WEC2, WEC3, WEC4 and WEC5) exist in all of the structures regardless of the different internal CO2 pressures, dramatic variations of WEC1 (near to the proton-shuttling His64), WEC2 (near to WI, W'I and W1) and W3b' are manifested by multiple alternate positions during the internal CO2 pressure decrease.

Although all five WEC waters are present regardless of the different internal CO2 pressures, some perturbations of WEC1 (near to the proton-shuttling His64) and WEC2 (near to WI, W'I and W1) were observed during the internal CO2 pressure decrease, manifested by multiple alternate positions (Fig. 5). The dynamic motions of the WEC waters imply their direct interplay with the proton-transfer water network in the active site. Specifically, interactions between WEC1 and W3b were observed. Previously, the positions of W3a and W3b were thought to be invariant and singly occupied regardless of CO2 binding, leading to the belief that the main role of W3a and W3b was to stabilize the W2 water molecule that is directly located within the proton-transfer wire. However, in this study an alternate position of W3b [named W3b', which is different from the two alternative positions (W3b' and W3b'') in CO2-bound apo CA II in Kim et al. (2016)] was observed, along with an alternate position of WEC1 (named W'EC1 in this study), in 15 and 7.0 atm CO2 hCA II (Figs. 5a and 5b). In these structures, the distances between W3b, W3b', WEC1 and W'EC1 are so close (1.3-1.7 Å; Supplementary Fig. S5) that they organize into a continuous tube of electron density (Fig. 5). W3b' and W'EC1 disappear in the lower internal CO2 pressure structures, with WEC1 recovering to the singly occupied position (Figs. 5c, 5d and 5e). These results suggest that the waters in the proton-transfer network (W1/W2/W2'/W3a/W3b/W3b'), the intermediate waters W'I/WI and the water network of the entrance conduit (WEC1/WEC2/WEC3/WEC4/WEC5) all act interdependently, with their motions correlated.

3.3. Mechanism of the restoration of the proton-transfer water network

By lowering the CO2 pressure in hCA II crystals, we captured additional intermediate states, including dually occupied positions of W1 and WI, an active site partially occupied with CO2, WDW and W'DW, a new intermediate water W'I and an alternate position of W3b (W3b').
By realizing that WI and W'I are transiently stabilized by several entrance-conduit water molecules and that they rearrange during the restoration of the proton-transfer water network, we propose the sequential events in the formation of the water network that replenishes WZn, and the consequent connection of the His64-mediated proton-transfer wire, during the catalytic turnover of hCA II. Although these events have been postulated from the observations during CO2 release in this study, these mechanisms may also account for restoration of the water network after bicarbonate release, assuming that neither the CO2 nor the bicarbonate molecule directly mediates the water-network restoration process.

Only WI, and not W1, is observed near the active site of hCA II in the fully CO2-bound state (Fig. 6a). The absence of W1 suggests that the hydrogen-bonded water network from WZn to His64 (charged and in the 'out' position) is disrupted. In fact, when hCA II is fully CO2-bound, the proton transfer should already have happened, resulting in the deprotonation of WZn to OH−, which is necessary for the nucleophilic CO2 attack (the first step in equation 1). After bicarbonate formation via this nucleophilic attack, the product bicarbonate subsequently leaves the active site and WZn is replenished, along with restoration of the active-site water network, prior to the proton-transfer process from WZn. It is likely that WZn replenishment and water-network restoration are directly mediated by the transient waters WI and W'I. After bicarbonate diffuses out of hCA II, WI seems to immediately fill the positions of both W1 and WDW (Fig. 6b). This directional branching movement of WI is predictable, since the distance between WI and W1 is 2.0 Å and the distance between WI and WDW is 2.4 Å. This interpretation is further supported by the observation that the electron density of W1 emerges as that of WI disappears (Figs. 3a, 3b, 3c and 3d), as well as by the observation that the electron density of WI is fused to the electron density of WDW (Supplementary Fig. S1). Subsequently, W1 can move to WZn (the distance between W1 and WZn is 2.6 Å), and WDW can shift to either WZn or W'DW (the distances from WDW to WZn and to W'DW are 2.4 and 2.2 Å, respectively). Judging by the distance from WI to WZn (2.6 Å), WI can also directly flow into the WZn position (Fig. 6b).

Figure 6. Proposed mechanism of water-network restoration. (a) In hCA II with fully bound CO2, WZn exists as OH− and nucleophilic attack occurs to form bicarbonate. In this state, only WI is present and not W1, suggesting that the water network for proton transfer is broken. (b) The sequential events of water-network restoration as the product bicarbonate leaves the active site. The intermediate water WI fills the places of W1 and WDW, and these W1 and WDW waters, or WI itself, can move to the WZn position. WDW also fills the position of W'DW. Note that four waters (W1, WZn, WDW and W'DW) are eventually filled from WI during this water-network restoration process. A newly found intermediate water, W'I, located between WI and the outside bulk solvent, is stabilized by the entrance-conduit waters and seems to facilitate the fast recharging of WI. (c) Only after the water network is restored can proton transfer occur from WZn to the outside through W1/W2/W2'/His64in/His64out. Now, with CO2 binding, hCA II is ready for the next catalytic turnover.
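The relay pathway proposed above rests on the inter-water distances (2.0-2.6 Å) quoted in the preceding paragraph. A minimal sketch of this kind of geometric plausibility check is given below; the coordinates are hypothetical placeholders standing in for the deposited water positions, and the 3.0 Å cutoff is an assumed upper bound for a single water hop, not a value taken from this study.

import numpy as np

# Hypothetical water coordinates (Å); replace with positions read from
# the deposited PDB entries (5yui/5yuj/5yuk).
waters = {
    "WI":   np.array([0.0, 0.0, 0.0]),
    "W1":   np.array([2.0, 0.0, 0.0]),
    "WDW":  np.array([0.0, 2.4, 0.0]),
    "WZn":  np.array([1.9, 1.7, 0.6]),
    "W'DW": np.array([0.0, 4.6, 0.0]),
}

CUTOFF = 3.0  # Å; assumed upper bound for a single water-to-site hop

names = list(waters)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        d = float(np.linalg.norm(waters[a] - waters[b]))
        if d <= CUTOFF:
            print(f"{a} -> {b}: {d:.1f} Å (plausible single hop)")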
As WI replenishes multiple water positions (W1/WDW/W'DW/WZn), it is important that the bulk solvent supplies the WI position rapidly (acting as a water reservoir), a process that seems to be facilitated by W'I. W'I is separated from WI by 2.2 Å, is located closer to the bulk solvent, and is transiently stabilized by the dynamic motions of the water molecules in the entrance conduit, which take place in concert with the changes of solvent in the active site (Fig. 4 and Supplementary Fig. S4). As the W1/WDW/W'DW/WZn positions are filled, the intermediate water WI is destabilized by steric hindrance with W1 (the distance between W1 and WI is only 2.0 Å). In addition, the dynamic motions of the water molecules in the entrance conduit decrease as the water network is restored in the active site (Fig. 5), which causes destabilization of W'I. Therefore, the intermediate waters WI and W'I disappear. Finally, the active-site water network is fully restored and proton transfer occurs from WZn to His64 (uncharged and in the 'in' position) via W1/W2/W2' (Fig. 6c).

Conclusions

Structural comparisons between hCA II in complex with CO2 and during its release reveal intermediate snapshots of the water-network rearrangement in the active site as the waters fill the void following CO2 liberation. Based on our observations, insight into the water-network restoration prior to proton transfer is proposed. While previous studies of the catalytic activity of hCA II have mainly focused on the CO2 binding site (Zn/WZn/WDW) and the proton-transfer water network (W1/W2/W3a/W3b), our results indicate that the intermediate and alternate waters (W'I, WI, W2' and W3b') and the entrance-conduit waters (WEC1, WEC2, WEC3, WEC4, WEC5 and their alternative positions) are critically involved in catalysis by hCA II. The substrate CO2 enters via the hydrophobic half of the active site, while the product HCO3−, being a charged molecule, exits by perturbing the ordered waters that fill the hydrophilic half of the active site (Silverman et al., 1979; Koenig et al., 1983). Thus, the ordered waters within the active site and its vicinity are likely to exist in a state of intermittent rearrangement during the forward and reverse reactions of catalysis. Taken collectively, our results provide snapshots of low-energy stages of water rearrangement during catalysis. Future mutation studies perturbing the protein regions that stabilize these waters would provide more evidence of their roles in the reaction. Moreover, our results suggest that the catalytic activity of hCA II can be more thoroughly understood in terms of the 'extended' catalytic waters (WDW/W'DW/WZn/W1/WI/W'I/W2/W2'/W3a/W3b/W3b'/WEC1-WEC5). Molecular-dynamics simulations on this extended water network may reveal further insights into the bioenergetic mechanisms utilized by hCA II to generate ordered water networks from the surrounding disordered bulk solvent (Riccardi et al., 2006; Roy & Taraphder, 2007).
Machine Learning-Enabled Tactile Sensor Design for Dynamic Touch Decoding

Abstract

Skin-like flexible sensors play vital roles in healthcare and human-machine interactions. However, general goals focus on pursuing the intrinsic static and dynamic performance of skin-like sensors themselves, accompanied by diverse trial-and-error attempts. Such a forward strategy almost isolates the design of sensors from the resulting applications. Here, a machine learning (ML)-guided design of a flexible tactile sensor system is reported, enabling a high classification accuracy (≈99.58%) of tactile perception in six dynamic touch modalities. Different from intuition-driven sensor design, such ML-guided performance optimization is realized by introducing a support vector machine (SVM)-based ML algorithm, along with specific statistical criteria for fabrication-parameter selection, to excavate features deeply concealed in the raw sensing data. This inverse design merges the statistical learning criteria into the design phase of the sensing hardware, bridging the gap between device structures and algorithms. Using the optimized tactile sensor, high-quality recognizable signals are obtained in handwriting applications. Besides, with additional data processing, a robot hand assembled with the sensor is able to complete real-time touch-decoding of an 11-digit braille phone number with high accuracy.

To achieve a desirable sensor, general goals target acquiring high-performance properties, such as superior sensitivity, wide working range, excellent repeatability, etc. [22] This is typically accompanied by diverse trial-and-error attempts based on intuition or quasi-random fabrication parameters. [23] Specific applications are then performed using the optimal device with decent computational analysis of the data output. However, such a forward strategy almost isolates the design of sensors from the resulting applications, which may cause an increasing data burden, weaken personalized signal features, and reduce the efficiency of computational analysis. In contrast, a data-driven design approach links the training data with the target sensing information, which bridges the gap between the hardware (device structures) and the software (algorithms).

In this work, a co-designed flexible sensor system realized by an ML-reinforced design strategy is proposed to optimize the output signals of laser-induced graphene (LIG)-based tactile sensors via signal-quality enhancement under various evaluation criteria. Owing to the task-oriented requirement in dynamic friction interactions between the sensor and the measured objects, the discrimination of signals induced by six representative dynamic touch modalities is considered as a generalized evaluation criterion during the sensor design optimization. These criteria, based on an SVM and several statistical standards, are applied to intelligibly disclose the quality of the signals, enabling a high classification accuracy (≈99.58%) of tactile perception in six dynamic touch modalities. This co-designed system is able to realize visual discrimination of handwriting signals and recognition of braille numbers.
Results and Discussion

In the proposed ML-assisted sensor design, a set of parameters related to the triboelectric nanogenerator (TENG)-based tactile sensing performance under the contact-sliding mode is selected as the initial optimization targets, including the type of readout signal (e.g., current or voltage), the distribution of electrodes, and the shape and density of microstructures (Figure 1). Six dynamic touch modalities are performed on the sensor by the finger skin, including press, pat, and up, down, left, and right sliding. These multidirectional interactions are selected as targeted tasks to evaluate each fabrication parameter. The signal qualities are evaluated by an SVM-based ML algorithm and several statistical criteria to screen the optimal value for each fabrication parameter. The evaluation and optimization process is repeated until all targets are optimized, giving rise to a group of desirable parameters for sensor fabrication and data acquisition. Different from general intuition-driven sensor design, this work provides a data-driven approach to optimizing fabrication parameters for the targeted applications, which closely connects the device design and the algorithms.

Figure 2a illustrates the fabrication process of the flexible TENG-based tactile sensor. First, the interdigital electrodes were fabricated via a digital infrared (IR) laser direct-writing technique. By virtue of the laser-induced thermal effect, LIG patterns could be obtained by laser carbonization of a polyimide (PI) film. [28-33] Then, the LIG/PI film was spin-coated with polydimethylsiloxane (PDMS). After the PDMS solution infiltrated into the porous LIG, a heat-curing process was performed, followed by peeling off the PI to achieve a thin LIG/PDMS film. To introduce additional electronegative groups as well as remove surface impurities, oxygen plasma treatment of the LIG/PDMS was conducted. The next crucial step was the alignment of a fluorinated ethylene propylene (FEP) film on the LIG/PDMS, which aims to enhance the triboelectric effect during contact-sliding processes. Finally, the device was encapsulated by a layer of PDMS. During the co-design process, the grid-like and fingerprint-like microstructures were created on the top layer of the encapsulation PDMS by laser texturing.

To validate the properties of the LIG before and after transfer, the surface morphology and Raman spectra of LIG/PI and LIG/PDMS were characterized, respectively. During the laser engraving process, the rapid release of gas leads to the high porosity of the structures (Figure 2b). After the transfer of the LIG onto PDMS, the porous structures are well preserved (Figure 2c). Furthermore, the LIG presented Raman peaks similar to those of graphene (i.e., D, G, and 2D peaks) at 1349, 1589, and 2711 cm−1, respectively (Figure 2d). However, the LIG/PDMS shows relatively weak intensity of the D and G peaks, probably caused by mechanical delamination introducing additional defects. [34]
As illustrated in the fabrication steps, the TENG-based tactile sensor is composed of four layers: top and bottom PDMS encapsulations, interdigital LIG electrodes, and an FEP film (Figure 2e). When a finger touches the surface of the PDMS, the human skin is positively charged owing to the different electron affinities of the two materials. Based on the principle of electrostatic induction, the horizontal sliding of the charged finger drives electron transfer, with an induced current between the two LIG electrodes (Figure 2f). In such a case, the interdigital design of the LIG electrodes allows for a gradient electron transfer, so the induced current alternates between negative and positive to balance the potential difference between the two electrodes. This mechanism effectively incorporates information related to the dynamic touch modalities into the raw data. Notably, two conductive electrodes are applied for the free-standing TENG rather than utilizing the grounding method. Therefore, the accumulated charges of a moving object can be transported between the cross-distributed interdigital electrodes for facile measurements.

To characterize the performance of this tactile sensor, the open-circuit voltage and short-circuit current were recorded as functions of the contact force, frequency, area, and material. For contact forces, both the open-circuit voltage and the short-circuit current of the TENG increase gradually as the force increases from 0.3 to 20 N (Figure 2g; Figure S1a,b, Supporting Information). With increasing contact frequency, the short-circuit current presents a positive correlation, while the open-circuit voltage changes negligibly (Figure S1c, Supporting Information). This proves that the output voltage of the proposed TENG is independent of frequency. A minor degradation of the short-circuit current is captured at a frequency of 8 Hz, probably due to the saturation of ionized charges (Figure 2h).

To enhance the performance of the tactile sensor, the effects of the contact area and the dielectric properties of the contact materials were investigated. Notably, both the open-circuit voltage and the short-circuit current evidently increase with the contact area from 1 to 4 cm2 (Figure 2i; Figure S1d, Supporting Information). In addition, contact materials with high permittivity and a strong triboelectric effect, such as copper, human skin, and FEP film, generate a short-circuit current nearly three times higher (>350 nA) than tissue paper (<50 nA), illustrating a more remarkable signal-to-noise ratio (Figure 2j). Since the tactile sensor was designed for feature-signal extraction in a touch-sliding process, a high signal-to-noise ratio plays a significant role in the data analysis. Based on the sensing principle, the small interferences (<7.5%) caused by both high moisture (>90% RH) and temperature variations (25-60 °C) on the output amplitude of this tactile sensor are almost negligible (Figure S2, Supporting Information). In this case, an FEP film was chosen as the middle-layer material between the interdigital LIG electrodes and the PDMS to further enlarge the contact signals. To validate the durability of the device, over 60 000 repeated slaps were conducted. The negligible change in performance reflects the stability and signal repeatability of the TENG as a tactile sensor (Figure 2k).
The design of the TENG-based flexible tactile sensor reinforced by ML involves several fabrication parameters, especially the arrangement of the electrodes and the shape and density of the surface microstructures (Figure 3a). Signal qualities were assessed to quantify the effects of these parameters. It should be noted that the criteria used to evaluate signal quality should be as plain and general as possible for disclosing the properties of the signals, as a sophisticated criterion may muffle them. This often happens along with an overfitting problem, where a strong criterion, such as a neural network, reports high scores regardless of the signals' quality. Thus, this study proposes two easily implemented criteria, i.e., the average classification accuracy of a set of SVMs and the separability of the signals in a dimensionality-reduced space, to select a group of optimal fabrication parameters. Given a set of categorized data, N SVM classifiers were implemented in parallel to classify them, during which the average classification accuracy (1/N) Σ_{k=1..N} Acc_k was obtained as the first criterion (Figure 3b). Next, the t-distributed stochastic neighbor embedding (t-SNE) algorithm was employed to reduce the dimensionality of the raw signals into a 2D space (the range of its two dimensions was projected into [0, 1]), where the separability of the signals was evaluated as diagrammed in Figure 3c. Specifically, the clustering center of each category was computed as the mean value of the involved data, followed by the computation of the Euclidean distance between each pair of centers as the next criterion (e.g., e_ij indicates the Euclidean distance between clustering centers i and j). In addition, the dispersion of each cluster was approximated by its standard deviation σ_i. The three indicators reflect the properties of the collected signals, giving rise to the total criterion

Score = (1/N) Σ_{k=1..N} Acc_k + Σ_{i<j} e_ij + Σ_{i=1..C} (−log σ_i),   (1)

where N is the number of SVM classifiers and C is the number of clustering centers. It should be noted that a normalization technique was performed on the three indicators to project them into the range [0, 1], in order to eliminate the effect of their different dimensions. Additionally, since a smaller deviation is expected for signals of higher repeatability, σ_i enters as its negative logarithm, −log σ_i. By maximizing the criterion score in Equation (1), the fabrication parameters can be optimized.

In order to generate the categorized data to support the proposed criteria, six dynamic touch modalities were performed on the tactile sensor to produce datasets for the optimization of the fabrication parameters. The interdigital LIG electrodes generate alternating signals based on the working mechanism of the TENG, and these signals exhibit distinct temporal features dominated by the different sliding directions. Therefore, the six touch modalities were chosen as press, pat, and up, down, left, and right sliding. In the process of data collection, a semi-automatic annotation strategy is proposed to segment and annotate the collected data, as illustrated in Figure S3 (Supporting Information) and summarized in Algorithm S1 (Supporting Information), where an anchor sample is applied to search for the remaining data samples; a sketch of this matching step is given below.
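Algorithm S1 itself is given in the Supporting Information and is not reproduced here. As an illustration only, the following Python sketch shows one plausible form of such anchor-based segmentation, using normalized cross-correlation between an anchor template and a sliding window; the threshold of 0.8 and the function name are illustrative assumptions rather than the authors' exact procedure.

import numpy as np

def find_events(stream, anchor, threshold=0.8):
    # Slide the anchor template over the recording and keep windows whose
    # normalized cross-correlation with the anchor exceeds the threshold.
    m = len(anchor)
    a = (anchor - anchor.mean()) / (anchor.std() + 1e-12)
    hits, i = [], 0
    while i <= len(stream) - m:
        w = stream[i:i + m]
        wn = (w - w.mean()) / (w.std() + 1e-12)
        if float(np.dot(a, wn)) / m > threshold:
            hits.append(i)
            i += m  # skip past the matched event
        else:
            i += 1
    return hits

# Hypothetical demo: two sine-shaped events embedded in noise.
rng = np.random.default_rng(1)
anchor = np.sin(np.linspace(0, 2 * np.pi, 50))
stream = np.concatenate([rng.normal(0, 0.05, 100), anchor,
                         rng.normal(0, 0.05, 100), anchor])
print(find_events(stream, anchor))  # two hits, at (or just before) 100 and 250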
As the induced current and voltage are both closely associated with the sensing performance of the TENG-based tactile sensor, the two types of signal output are compared. First, the repeatability of the signals is examined, as shown in Figure S4a,b (Supporting Information), where the voltage signals exhibit better repeatability, with smaller deviations from the average signal (black dotted line in each subplot), while the current signals are distributed irregularly and far from the cluster center. Further analysis in the two-dimensional space (Figure S4c,d, Supporting Information) reveals that the voltage signals are better clustered, with clearer boundaries among the different touch modalities, compared with the current signals. Besides, ten linear SVM classifiers were implemented to classify these data, resulting in average accuracies of 94.278% and 95.579% for the voltage and current signals, respectively. The separability and dispersion were then calculated using the criteria defined in the previous section, resulting in [3.636, 11.877] and [3.528, 11.267] for the voltage and current signals, respectively. To eliminate the influence of the different dimensions, the results were normalized, transforming [94.278%, 3.636, 11.877] and [95.579%, 3.528, 11.267] into [0, 1, 1] and [1, 0, 0], respectively. This indicates that the voltage signals receive a total criterion score of two, while the current signals receive one. Therefore, voltage-type signals are the better choice as the sensor's output in the proposed design.

Second, the distribution density of the electrodes in the proposed design can affect the complexity of the signals, which is reflected by subtle peaks. To determine the impact of the electrode density on the classification results, the tactile performance with sparse and dense distributions of electrodes was investigated. The clustering results in Figure S5 (Supporting Information) reveal that data from the arrangement with sparse electrode pairs are more distinguishable among the different classes. Scores for the sparse and dense arrangements were calculated, resulting in [96.975%, 3.695, 14.759] and [95.958%, 3.528, 11.542], respectively. This illustrates that a denser arrangement of electrodes allows more sensitive subtle peaks in the signals, but it also generates more superimposed peaks, leading to difficulty in signal recognition by an ML algorithm. After normalization, the scores were [1, 1, 1] and [0, 0, 0], resulting in total criterion scores of three and zero, respectively. Therefore, the sparse arrangement of electrode pairs is selected in the proposed design.

After evaluating the type of output signal and the density of the electrodes, the output of surface microstructures with different shapes and intervals textured by the laser was further investigated. Similar to the natural grooves on finger skin, rugged microstructures on the top layer of the PDMS were designed to reduce the disturbance of the viscous effect on the signal quality, resulting in a further enhancement of the data discrimination. To validate this, two microstructures commonly applied on tactile sensors were studied, namely fingerprint-like and grid-like shapes with different groove intervals (Figure 3a). Comparing the voltage output for the six touch modalities, Figure S6 (Supporting Information) illustrates that the data from the fingerprint-like structure show a concentrated distribution around the average values, while the data from the grid-like structure exhibit more instability. Besides, the dimensionality-reduction results shown in Figure 3e,f indicate that the data of the fingerprint-like structure present a stronger separability, with a much more concentrated distribution within classes, compared with those of the grid-like structure. The calculated scores based on the proposed criteria for the fingerprint-like and grid-like structures are [96.50%, 4.227, 14.958] and [95.056%, 3.725, 11.783], respectively. After normalization, the scores are transformed into [1, 1, 1] and [0, 0, 0], respectively, indicating that the fingerprint-like structure is more appropriate for the sensor.

Furthermore, the performance of the fingerprint-like microstructure is found to be affected by the interval between adjacent grooves (Figure S7a, Supporting Information). During the co-design process, two sensors were first fabricated with groove intervals of 200 and 300 μm, respectively, and the data corresponding to the six touch modalities were collected. The proposed criteria give scores of [93.533%, 3.581, 12.670] and [94.558%, 3.561, 11.283] for 200 and 300 μm, respectively (Figure S7b,c, Supporting Information). Similar results are observed in Figure 3g,h. This suggests that a smaller interval between grooves leads to a better-performing sensor. Therefore, a fingerprint-like pattern with a groove interval of 100 μm was produced on the sensor's surface, which almost reaches the processing limit of this IR laser system. Compared with the sensors with the larger intervals of 200 and 300 μm, the sensor with a fingerprint-like interval of 100 μm exhibits better repeatability, contributing to a more concentrated clustering within classes (Figure 3i; Figure S7d, Supporting Information). The clearer boundaries among the different classes also indicate a higher separability of its raw data. This sensor achieves scores of [98.033%, 4.044, 12.535]. After normalization, scores of [1, 0.902, 1], [0.042, 1, 0], and [0, 0, 0.228] are obtained for groove intervals of 100, 200, and 300 μm, respectively. As a result, the total scores are calculated as 2.902, 1.042, and 0.228, indicating that a groove interval of 100 μm is the optimal selection. The sketch below illustrates how such indicator triplets can be computed.
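The indicator triplets quoted above ([accuracy, separability, dispersion]) can be computed as in the following minimal sketch, which follows the description of Equation (1) using scikit-learn. Whether the pairwise centre distances e_ij are summed or averaged is not fully specified in the text, so a summed form is assumed here; the min-max normalization described earlier is then applied across the compared fabrication settings rather than inside this function.

import numpy as np
from sklearn.svm import SVC
from sklearn.manifold import TSNE

def criterion_indicators(X, y, n_svm=10):
    # Indicator 1: mean training accuracy of N linear SVMs whose
    # regularization parameter C ranges from 0.1 to 1.0 (ten classifiers,
    # trained on the full dataset, as described in the Experimental Section).
    accs = [SVC(kernel="linear", C=c).fit(X, y).score(X, y)
            for c in np.linspace(0.1, 1.0, n_svm)]
    # Project the raw signals into 2D with t-SNE and rescale both axes to [0, 1].
    z = TSNE(n_components=2, random_state=0).fit_transform(X)
    z = (z - z.min(axis=0)) / (z.max(axis=0) - z.min(axis=0))
    labels = np.unique(y)
    centers = np.array([z[y == c].mean(axis=0) for c in labels])
    # Indicator 2: separability, summed pairwise distances e_ij between centres.
    sep = sum(float(np.linalg.norm(centers[i] - centers[j]))
              for i in range(len(labels)) for j in range(i + 1, len(labels)))
    # Indicator 3: dispersion, summed -log(sigma_i) over the clusters.
    disp = sum(-np.log(float(z[y == c].std()) + 1e-12) for c in labels)
    return float(np.mean(accs)), sep, disp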
In summary, to enhance the performance of the TENG-based tactile sensor, a set of evaluation criteria was developed and applied to select the desired type of output signal and to optimize the fabrication parameters. The comparison results are summarized in Table 1. After evaluation, voltage-based signals were selected as the sensor output, while sparsely distributed electrode pairs and fingerprint-like microstructures with the smaller interval contribute to the enhancement of the sensor performance. Figure 3j exhibits the output of the sensor with the optimized parameters for the six touch modalities, enabling a remarkable classification accuracy of 99.58% with a tuned linear SVM classifier.

Benefiting from the high recognition accuracy of the tactile sensor optimized by machine learning, recognizable signal differences for finger sliding in four different directions on the device surface are observed (Figure 4a,b). A critical feature of a free-standing TENG is that the proximity of an arbitrarily charged object can easily cause a significant increase in the open-circuit voltage, while a continuous, weak variation of subtle signals is induced by the sliding process. Notably, the hidden peaks displayed in the insets represent the dynamic friction process. Based on the aforementioned ML-assisted selection of the readout signals, the open-circuit voltage achieves higher scores for the separability and dispersion of the data than the short-circuit current. Thus, the handwriting of different letters is identified under voltage measurement to realize character recognition (Figure 4c).

Figure 4d-h present the real-time recognition of English words and sentences using the sensor. Different words and sentences are easy to distinguish by observing the signal features. However, for the same word in uppercase and lowercase, like "ZJU" and "zju", it is somewhat difficult to recognize them relying only on the voltage signals. Thus, to further optimize the capability of signal recognition, additional ML algorithms are required for computational analysis.
To mimic the superior tactile behavior of human skin, the proposed tactile sensor was integrated onto a robot finger for braille recognition (Figure 5a). With the addition of ML algorithms, the robot-sensor system works in a closed loop of sensing, extracting, computing, and decoding information (Figure 5b). Compared with the handwriting applications, braille recognition is more challenging because the recognition mode shifts from single-point contact to multi-point contact in a dynamic process. Initially, the touch signals of the ten braille numbers were collected via the robot hand (Figure 5c). Through preprocessing with a high-pass filter, the subtle signals induced by the sliding process were extracted. In the process of data acquisition, the great shape similarity of these braille numbers makes visual discrimination of the corresponding signals difficult (Figure 5d). This is also illustrated by the overlaps among the clusters in the dimensionality-reduced results (Figure 5e). Thus, a one-dimensional convolutional neural network (CNN) model was utilized to automatically perform feature learning and classification after data acquisition (≈320 acquisitions per number) (Figure 5f). Attributed to the powerful nonlinear fitting capability of the CNN, an average classification accuracy of 96.12% was achieved (Figure 5g). Most of the braille numbers achieve a classification accuracy over 95% (i.e., digits 1, 2, 4, 5, 6, 7, and 9), while lower accuracies are observed for digits 0, 3, and 8. This small performance difference probably results from the force-control accuracy and repetition accuracy of the manipulator during data acquisition. Such phenomena may also affect the accuracy of online recognition of braille numbers, resulting in a classification accuracy of 93.33% in the online decoding process (Video S1, Supporting Information).

An additional experiment was performed to evaluate the sensor's recognition accuracy before removal and after reinstallation. The data collected before the detachment were synthesized into datasets S_train and S_test1, which were used to train and subsequently test the classifier, respectively. Another dataset, denoted S_test2, was collected after the sensor's detachment and reinstallation to evaluate the system comprising the tactile sensor and the pretrained classifier. As shown in Figure S8 (Supporting Information), data collected from different braille numbers are distinguishably separate, while data of the same number are clustered together. This implies that the wearing variability does not change the sensor's capability of distinguishing different textures. Besides, a training accuracy of 99.25% for S_train, a test accuracy of 96% for S_test1, and a test accuracy of 87.5% for S_test2 were achieved. Notably, a decline in classification accuracy is observed after the sensor's detachment and reinstallation. This is attributed to the alteration in the data distribution, but the issue can be effectively addressed by collecting a small amount of new data and fine-tuning the classifier.
To validate the feasibility for practical applications, the smart robot hand equipped with the sensitive tactile sensor was applied to decode 11-digit phone numbers in real time. After printing a group of braille phone numbers, the robot hand was programmed to continuously perceive each number and provide real-time signal feedback, together with the display of the recognition results on a graphical user interface (GUI) (Figure 5h,i). The GUI was designed for number output after identification and for output visualization (Figure 5j; Video S2, Supporting Information). With this capability, such a smart robotic hand is able to call an emergency contact for people with disabilities and holds high potential as a prosthetic for rapid and accurate recognition, just like human hands.

Conclusion

In summary, we introduced a TENG-based flexible tactile sensor that enables ML-assisted device design in output-signal selection and fabrication-parameter optimization. After setting the evaluation criteria, the parameter values for the output signal, the distribution density of the electrodes, and the diverse laser-carved surface microstructures were compared and selected according to the statistical analyses of six contact modalities. Based on the comprehensive evaluation of the fabrication parameters and the machine-learning co-designed tactile performance, a classification accuracy of ≈99.58% was obtained, which is higher than that before parameter optimization (≈95.579%). Given the optimal tactile sensing performance, the tactile sensor was successfully applied for handwriting recognition of various English letters and sentences. Furthermore, by applying a customized CNN model to extract features and estimate the decision boundaries, a classification accuracy of 96.12% was achieved for the ten braille numbers. To imitate human perceptual feedback, a smart robot hand assembled with the sensor accomplished the task of dynamic identification of an 11-digit braille phone number. This work provides guidance for purposively constructing sensors based on an inverse design strategy, which challenges intuition-driven sensor design.

Fabrication of the LIG/PDMS-Based Tactile Sensor: The interdigital LIG electrodes were fabricated by laser patterning (power: 7.24 J cm−2) on a PI substrate utilizing an infrared (IR) laser system at a wavelength of 10.6 μm. Then, the LIG/PI film was uniformly spin-coated with PDMS precursors at a speed of 800 rpm, repeated 2-3 times. After vacuum treatment and solidification at 90 °C for 30 min, the LIG patterns could be transferred intact onto the PDMS surface (thickness: ≈180 μm). Next, the surface of the LIG/PDMS was treated with oxygen plasma, followed by the alignment of an FEP film. Finally, the tactile sensor was obtained by encapsulating PDMS on top of the FEP film.

Characterizations: The surface morphology of the porous LIG was characterized by a scanning electron microscope (SEM) (Hitachi SU3500, Japan). A Raman spectrometer (LabRAM Soleil, HORIBA, Japan) was applied to analyze the Raman spectra of LIG/PI and LIG/PDMS. A linear motor (DA60-B1-T60-C010-0.2, Dynamikwell Technology, China) was used to apply different stimuli for the characterization of the triboelectric performance. The open-circuit voltage and short-circuit current were collected by an electrometer (Keithley 6514, Tektronix, USA). For the data collection in the two applications, the open-circuit voltage was obtained via a portable Arduino board.
Data Collection: Data collection was involved in two cases, namely the ML-guided sensor design and the recognition of braille numbers. For the former, participants were instructed to perform the six dynamic touch modalities (100 times each). An electrometer was used to measure and record the induced signals in current/voltage form, followed by a band-pass filter to reduce noise. As described in Figure S3 and Algorithm S1 (Supporting Information), an anchor sample for each touch modality was first selected, and a similarity-matching algorithm was used to search for the remaining samples, resulting in 600 samples per class as a dataset. This process was repeated several times according to the iterations of the fabrication-parameter optimization; that is, an exclusive dataset was collected for each parameter. Considering that it was the data quality that was being assessed, all the data in a dataset were used to train the SVM classifiers to obtain the average training accuracy, without dividing them into training/test subsets.

In the braille-recognition application, a robot arm (UR5, Universal Robots) equipped with the sensor was used to collect data. Each braille number (0-9) was captured by a portable board (Arduino Uno) in a sliding-touch process 320 times, corresponding to 320 samples. Given that the signals induced by the braille bulges are expressed as small peaks, a high-pass filter was used to eliminate the effects caused by proximity. The same similarity-matching algorithm was used to extract the data samples to compose a dataset, with 280 samples for training and 40 for testing. To evaluate the effect of wearing variability, data were collected following the same setup and procedure. As a result, 400 samples (40 samples per braille number) were gathered as training data (denoted dataset S_train) and 200 samples (20 samples per braille number) as test data (denoted dataset S_test1). Subsequently, the sensor was detached from the robotic finger and reinstalled, and new data were gathered, giving rise to 200 samples (20 samples per braille number) as new test data (denoted dataset S_test2).

ML Models: The SVM classifiers employed in the predefined criteria were implemented in Python with the scikit-learn package. A total of ten classifiers were constructed, with the regularization parameter ranging from 0.1 to 1.0. A one-dimensional CNN model was developed in Python under the PyTorch framework. The CNN consisted of five convolution layers, each followed by batch normalization and a rectified linear unit (ReLU) activation function. The features encoded by the CNN were downsampled by an adaptive average pooling layer to a constant length before being fed into a multilayer perceptron classifier, which allowed for variable-length inputs. This model was trained using the adaptive moment estimation (Adam) optimizer for 100 epochs with a learning rate of 0.001 and a batch size of 128. It was subsequently used for inference on the test set and in the real-time applications.
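A minimal PyTorch sketch of such a network is shown below. It follows the stated design (five Conv1d blocks with batch normalization and ReLU, adaptive average pooling to a constant length, an MLP head, and Adam with a learning rate of 0.001); the kernel size, the pooled length, and the hidden width of the MLP are assumptions, since only the filter count (128, Figure 5f) is specified.

import torch
import torch.nn as nn

class BrailleCNN(nn.Module):
    # Sketch of the described 1D CNN; kernel size, pooled length and the
    # MLP hidden width are assumed values, not from the paper.
    def __init__(self, n_classes=10, channels=128, pooled_len=8):
        super().__init__()
        blocks, in_ch = [], 1
        for _ in range(5):  # five convolution blocks
            blocks += [nn.Conv1d(in_ch, channels, kernel_size=5, padding=2),
                       nn.BatchNorm1d(channels),
                       nn.ReLU()]
            in_ch = channels
        self.features = nn.Sequential(*blocks)
        self.pool = nn.AdaptiveAvgPool1d(pooled_len)  # enables variable-length inputs
        self.head = nn.Sequential(nn.Flatten(),
                                  nn.Linear(channels * pooled_len, 64),
                                  nn.ReLU(),
                                  nn.Linear(64, n_classes))

    def forward(self, x):  # x: (batch, 1, signal_length)
        return self.head(self.pool(self.features(x)))

model = BrailleCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # 100 epochs, batch size 128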
Figure 1. Schematic of the machine learning-assisted sensor design via fabrication-parameter optimization.

Figure 2. Characterizations of the TENG-based tactile sensor. a) Fabrication procedure of the sensor. SEM of b) porous LIG and c) PDMS-embedded LIG. d) Raman results of LIG on PI and LIG on PDMS after transfer. e) Exploded view of the device. f) Working mechanism of the sensor. The triboelectric current of the TENG under stimuli of g) different contact forces, h) frequencies, and i) contact areas. j) The triboelectric current induced by various contact materials, including tissue paper, carbon cloth, nitrile, copper, skin, and FEP. k) The wear-resisting property of the proposed triboelectric sensor under over 60 000 cycles of friction.

Figure 3. Optimization of the fabrication parameters assisted by machine learning methods. a) Different shapes and sizes of microstructures on the top layer of the PDMS surface. Scale bars: 100 μm. b) Calculation of the average classification accuracy by the SVM classifiers. c) Calculation of the separability and dispersion of the data after dimensionality reduction. d) Schematic of the six dynamic touch modalities. Dimensionality-reduction results of the data collected from sensors with e) fingerprint-like and f) grid-like microstructures, respectively. Dimensionality-reduction results of the data collected from sensors with microstructures of different sizes at g) 300, h) 200, and i) 100 μm, respectively. j) Output of the parameter-optimized sensor for the six touch modalities.
Figure 4. Optimized tactile sensing performance in handwriting applications. The outputs in the form of a) open-circuit voltage and b) short-circuit current of the tactile sensor when sliding a finger in four directions. Insets are the enlarged peaks extracted from the original results in the corresponding contact directions. c) Schematic of the handwriting demonstration by attaching the device to fingers. d-h) Handwriting results for different English words and sentences based on the proposed tactile sensor.

Figure 5. Dynamic decoding feedback of multi-point contact by assembling a tactile sensor on a robotic finger. a) Concept illustration of a recognition feedback system simulating finger perception. b) Closed-loop flow chart of a robotic sensing system that mimics human skin perception. c) Schematic of braille numbers from 0 to 9. d) Real-time signals of the ten braille numbers processed by a high-pass filter. e) Dimensionality-reduced clustering results of the ten braille numbers. f) A 1D CNN model with 128 filters used for data classification. g) Classification results of the ten braille numbers after 320 cycles of data acquisition. h,i) Real-time data-acquisition process for an 11-digit braille phone number realized by a smart robotic finger. j) Real-time signal feedback and display of the recognition results of the braille phone number on a graphical user interface.

Table 1. Summary of the comparison results during the optimization process.
Interactions of Linear Analogues of Battacin with Negatively Charged Lipid Membranes

The increasing resistance of bacteria to available antibiotics has stimulated the search for new antimicrobial compounds with less specific mechanisms of action. These include the ability to disrupt the structure of the cell membrane, which in turn leads to its damage. In this context, amphiphilic lipopeptides belong to the class of compounds which may fulfill this requirement. In this paper, we describe two linear analogues of battacin with modified acyl chains designed to tune the balance between the hydrophilic and hydrophobic portions of the lipopeptides. We demonstrate that both compounds display antimicrobial activity, with the lowest minimum inhibitory concentrations found for Gram-positive pathogens. Therefore, their mechanism of action was evaluated on a molecular level using model lipid films mimicking the membrane of Gram-positive bacteria. The surface pressure measurements revealed that both lipopeptides show the ability to bind to and incorporate into the lipid monolayers, resulting in decreased ordering of the lipids and membrane fluidization. Atomic force microscopy (AFM) imaging demonstrated that exposure of the model bilayers to the lipopeptides leads to a transition from the ordered gel phase to the disordered liquid-crystalline phase. This observation was confirmed by attenuated total reflection Fourier-transform infrared spectroscopy (ATR-FTIR) results, which revealed that the lipopeptide action causes a substantial increase in the average tilt angle of the lipid acyl chains with respect to the surface normal, to compensate for lipopeptide insertion into the membrane. Moreover, the peptide moieties in both molecules do not adopt any well-defined secondary structure upon binding to the lipid membrane. It was also observed that a small difference in the structure of the lipophilic chain, altering the balance between the hydrophobic and hydrophilic portions of the molecules, results in different insertion depths of the active compounds.

Introduction

The resistance of pathogens to available antibiotics is an increasingly serious problem in modern medicine [1]. This necessitates the search for new drugs which would exhibit different mechanisms of bactericidal activity compared with typical antibiotics acting on specific biochemical processes. A possible solution to this problem involves the use of compounds with less specific action, based, for example, on damaging the bacterial cell membrane. This condition is met by antimicrobial peptides targeting the bacterial cell membranes [2,3]. Most often, antimicrobial peptides are positively charged. As a result, they interact preferentially with the negatively charged membranes of bacterial cells, and their amphiphilic structure allows them to insert into the core of the membrane [4]. Such a model of action is widely accepted, since the bactericidal kinetics of antimicrobial peptides is often correlated with the depolarization of the cell membrane, showing that the mode of action involves disruption of the membrane integrity. Possible mechanisms include pore formation or membrane solubilization in a detergent-like manner [5]. A similar mode of action can be expected for short lipopeptides containing an acyl chain, usually coupled to the N-terminus of a peptide moiety that includes 2-8 amino-acid residues [6].
The advantage of short lipopeptides lies in their simpler structure and, therefore, easier synthesis, as well as in the wide possibilities for modifying their structure in terms of the balance between the hydrophilic and hydrophobic parts, as well as the charge density and distribution. For that reason, lipopeptides seem to be suitable for drug design, and some of them are already approved for clinical use; examples include daptomycin and the polymyxins [7,8].

Battacin is an example of a cationic cyclic lipopeptide, isolated from the bacterium Paenibacillus tianmuensis [9]. Like other lipopeptides, it consists of two parts: a lipophilic chain containing 3-hydroxy-6-methyloctanoic acid attached to a peptide composed of eight D- and L-amino acids, seven of which form a ring. Battacin contains the noncoded amino acid α,γ-diaminobutyric acid (Dab) in both D- and L-forms, which provides resistance to proteases. It was found to exhibit bactericidal activity against Gram-negative and Gram-positive bacteria, including multidrug-resistant and extremely drug-resistant clinical isolates. Unfortunately, in naturally occurring battacin, the high efficacy against multidrug-resistant bacterial strains is accompanied by nephro- and neurotoxicity, which eliminates its use as a clinical drug. To overcome this problem, numerous derivatives of battacin were designed and synthesized [10-12]. Among them, a promising class includes linear analogues, which were demonstrated to be active in terms of lysing bacteria and dispersing biofilms. Recently, the results of molecular dynamics simulations were reported by Chakraborty and coworkers, which shed some light on the possible action mechanism of battacin analogues [13]. It was demonstrated that the activity of linear analogues of battacin depends on the balance between the positively charged and hydrophobic moieties. It was found that the hydrocarbon chain of the lipidated N-terminal residue and the hydrophobic amino-acid residues, i.e., D-Phe and Leu, insert into the membrane core and anchor the lipopeptide to the membrane. The presence of Dab residues improves membrane binding through electrostatic interactions and increased hydrogen-bond formation. An interesting feature of these compounds is that, unlike typical antimicrobial peptides, their activity is not based on the presence of a specific secondary structure when bound to a lipid membrane. Hence, the mechanism of their membranolytic action may differ from those observed for antimicrobial peptides.

In this paper, we characterize two linear analogues of battacin with a peptide moiety containing the same sequence of amino acids as natural battacin, but with the lipophilic 3-hydroxy-6-methyloctanoyl chain replaced either by a linear decanoyl chain (LC10-OP) or by a branched 4-methylnonanoyl chain (BC10-OP). These lipophilic chains were chosen to modulate the balance between the hydrophobic and hydrophilic portions of the lipopeptides, which may affect their ability to insert into the lipid membrane. As demonstrated by Neubauer and coworkers, acyl-chain branching in short lipopeptides makes them more hydrophilic compared with analogues possessing the same number of carbon atoms [14]. Moreover, the same authors observed that short lipopeptides with a branched fatty-acid chain cause distinctly lower hemolysis compared with reference lipopeptides of similar hydrophobicity or with the same number of carbon atoms in a linear hydrocarbon chain.
The chemical structures of the lipopeptides studied in this work are shown in Scheme 1. The lipopeptides were tested in terms of their antimicrobial activity, and the mechanism of their action was evaluated on a molecular level using model lipid films, i.e., Langmuir monolayers and solid-supported lipid bilayers. The physicochemical characterization of the lipopeptide-membrane interactions was performed using surface-sensitive techniques, including surface pressure measurements, atomic force microscopy (AFM), and attenuated total reflection Fourier-transform infrared spectroscopy (ATR-FTIR). These methods enabled evaluation of the lipopeptide-induced changes in the structure of a model lipid membrane.

Synthesis of Lipopeptides

The LC10-OP and BC10-OP lipopeptides were synthesized using the well-established standard Fmoc/t-Bu methodology for solid-phase peptide synthesis [15]. An analogous protocol of lipopeptide synthesis was recently reported by our group [16]. Rink Amide AM resin (substitution level of 0.55 mmol/g) and Fmoc-protected amino-acid building blocks in a standard Fmoc-Xaa-OH/TBTU/DIPEA protocol (2 eq/2 eq/4 eq) were used. The same protocol was used for coupling the fatty acid to the N-terminus of the peptide moiety. The final lipopeptide was cleaved from the resin using Reagent B (trifluoroacetic acid/phenol/H2O/triisopropylsilane, 88:5:5:2, v/v/v/v). The final compounds were purified using reverse-phase HPLC (Shimadzu Prominence system) and a C18 Luna column (Phenomenex, 150 mm × 10 mm, 5 µm) with a linear gradient of H2O-acetonitrile-0.1% TFA, and the relevant fractions were lyophilized. The purity of LC10-OP and BC10-OP was assessed by analytical HPLC and was higher than 95%. The retention times for LC10-OP and BC10-OP were 26.4 min and 25.7 min, respectively. Because the lipopeptides were to be subjected to biological tests, it was necessary to replace the trifluoroacetate counterion with the chloride counterion. The lipopeptides were analyzed by high-resolution electrospray ionization mass spectrometry and FTIR.

Biological Activity

The bacterial strains were acquired from the Polish Collection of Microorganisms (PCM) or the American Type Culture Collection (ATCC). The Gram-positive strains were as follows: Staphylococcus aureus ATCC 29213, Staphylococcus epidermidis ATCC 12228, and Enterococcus faecalis ATCC 14506. The Gram-negative strains were as follows: Pseudomonas aeruginosa PAO1 PCM 499, Klebsiella pneumoniae PCM 1, and Yersinia enterocolitica PCM 2081. The strains were grown in lysogeny broth (LB). Single-colony material was transferred to 10 mL of LB medium and grown at 30 °C with shaking overnight. The optical density of the overnight cultures (at 600 nm) was then adjusted to 0.05 by dilution with a fresh portion of LB medium. The lipopeptides were dissolved in water. Serial dilutions of the compounds were prepared in LB medium, ranging from 5 to 50 mg/L (final concentrations). The experiment was performed by adding 100 µL of each of the prepared dilutions to 100 µL of the diluted overnight culture of bacteria in the wells of a 96-well microtiter plate. The minimum inhibitory concentration (MIC) was defined as the lowest concentration of test compound needed to inhibit bacterial growth when evaluated after 24 h of incubation at 30 °C with shaking (final optical density at 600 nm not greater than 0.05). Optical density measurements were performed using a TECAN Sunrise plate reader. Data were obtained from three independent experiments.
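Under the definition above, reading the MIC off a plate amounts to finding the lowest tested concentration whose final OD600 stays at or below the growth threshold. The following is a minimal sketch of that reading step; the plate readout values are hypothetical.

def mic(concentrations_mg_per_l, od600, threshold=0.05):
    # Lowest tested concentration whose OD600 after 24 h does not exceed
    # the growth threshold; returns None if growth occurs at every
    # tested concentration.
    inhibited = [c for c, od in zip(concentrations_mg_per_l, od600)
                 if od <= threshold]
    return min(inhibited) if inhibited else None

# Hypothetical readout for one strain (concentration in mg/L -> OD600).
concs = [5, 10, 15, 20, 25, 50]
ods = [0.42, 0.35, 0.04, 0.03, 0.02, 0.02]
print(mic(concs, ods))  # -> 15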
Surface Pressure Measurements

Lipid monolayers at the air/buffer interface were formed using a KSV NIMA Langmuir trough (Biolin Scientific, Sweden) equipped with two movable hydrophilic barriers. A Wilhelmy plate made of filter paper was used to measure the surface pressure. Before each experiment, the barriers and the trough were cleaned with a chloroform/methanol mixture and then Milli-Q water. Lipopeptides LC10-OP and BC10-OP were dissolved in water. POPG was dissolved in chloroform, DPPG was dissolved in a chloroform/methanol (65:35, v/v) mixture, and CL was dissolved in a chloroform/methanol (4:1, v/v) mixture. After preparation of the lipid solutions, the stock solution of the DPPG/POPG/CL (1:1:2 mol/mol/mol) mixture was prepared with a final concentration of 1 mg/mL. All lipid monolayers were formed on an aqueous solution of 0.01 M phosphate buffer saline (pH 7.4), either without or with the addition of lipopeptides. The lipid mixture was applied onto the subphase using a Hamilton syringe (50 µL). Spreading solutions were left on the buffer subphase for 10 min to complete solvent evaporation. Monolayers were compressed at a barrier speed of 10 mm/min and at a constant temperature of 21 ± 1 °C to record the surface pressure versus molecular area isotherms. All measurements were repeated at least three times to ensure reproducible results.

Preparation of Small Unilamellar Vesicles

Stock solutions containing ~5.0 mg/mL of the desired lipids were prepared in a similar way as for the surface pressure measurements. Then the solutions were mixed in a test tube at the desired molar ratio (DPPG/POPG/CL 1:1:2). Next, the solvent was evaporated by vortexing under a stream of nitrogen, and the test tube containing the dried lipid cake was placed in a vacuum desiccator for 1 h to remove the residues of solvent. Then, 1.0 mL of an aqueous solution of 0.01 M PBS was added to the lipid cake, and the mixture was sonicated at ~40 °C for 1 h. For infrared measurements, D2O was used for buffer preparation. After the sonication step, the suspension of lipid vesicles was homogeneous and transparent.

Atomic Force Microscopy

Atomic force microscopy (AFM) experiments were carried out with a Dimension Icon (Bruker) in PeakForce Tapping Mode with simultaneously recorded nanomechanical data including Young's modulus. ScanAsyst Fluid probes (Bruker, nominal spring constant 0.7 N/m) were used for imaging. The exact value of the spring constant was obtained by the thermal tune method before each experiment. Lipid bilayers were deposited on freshly cleaved mica substrates by vesicle spreading. The samples were imaged under in situ conditions in an aqueous 0.01 M PBS solution at a temperature of 21 ± 1 °C.

Attenuated Total Reflection Fourier-Transform Infrared Spectroscopy

All spectra were collected using a Nicolet iS50 infrared spectrometer (Thermo Fisher Scientific Inc.) equipped with a liquid-nitrogen-cooled MCT-A detector and a custom-made single-reflection accessory. The incident angle was set at 55°, and the spectral resolution was 4 cm⁻¹. The spectra are presented in absorbance units A = log(I0/I), where I0 and I correspond to the single-beam intensities of IR radiation observed for the reference and the sample, respectively. In all experiments, we used a silicon hemispherical prism. The bilayer was deposited on the planar surface of the prism by spreading of lipid vesicles. The refractive indices used for the molecular orientation calculations were 3.42 for Si, 1.45 for the lipid bilayer, and 1.32 for D2O.
Data processing was performed with Omnic 9 software (Thermo Fisher Scientific Inc., Waltham, MA, USA).

Results and Discussion

Lipopeptides LC10-OP and BC10-OP were first tested for their potential antimicrobial activity. For that purpose, we determined their minimum inhibitory concentrations (MICs), i.e., the lowest concentration of a compound that prevents visible growth of bacteria. The values of MIC for both lipopeptides against selected strains of bacteria are shown in Table 1. The results of the measurements summarized in Table 1 show that both lipopeptides exhibited variable activity against the tested bacterial strains. Their activity against the Gram-negative strains Y. enterocolitica and K. pneumoniae was found to be relatively low. The same applied to Gram-positive E. faecalis. Noticeably higher activity was observed against Gram-negative P. aeruginosa. However, the lowest values of MIC and, hence, the highest activity were found for Gram-positive S. aureus and S. epidermidis, which may suggest a certain degree of selectivity of the lipopeptides against these strains. Because the tested lipopeptides are amphiphilic in nature, it can be assumed that their antimicrobial activity is based on interaction with the bacterial cell membrane. To verify this hypothesis, a physicochemical characterization of lipopeptide interactions with lipid membranes was performed. Due to their high activity against S. aureus and S. epidermidis, the composition of the model films was selected to mimic the lipid composition of the cell membranes of Gram-positive bacteria [17].

Initially, the effect of LC10-OP and BC10-OP lipopeptides on model bacterial lipid membranes was studied using the Langmuir technique. As a model, we utilized a negatively charged lipid membrane composed of DPPG/POPG/CL (1:1:2). Lipopeptides were dissolved in an aqueous 0.01 M PBS subphase, and their final concentration was varied from 0.1 µM to 1 µM. The results are shown in Figure 1. Initially, the lipid monolayers were compressed on a PBS subphase without lipopeptides. The surface pressure (Π) vs. area per molecule (A) isotherms of the DPPG/POPG/CL monolayer display the lift-off at molecular areas of ~150 Å², and, at a surface pressure of ~18 mN/m, the phase transition from a liquid-expanded to a liquid-condensed phase appeared. The partial collapse of the monolayer occurred at ~56 mN/m. This is related to the collapse of POPG molecules, which are squeezed out from the monolayer [18]. The removal of POPG increases the condensation of the monolayer, and the mixture of DPPG and cardiolipin is further compressed up to ~70 mN/m, where the second collapse occurs. Because the aim of the experiment was to examine the effect of lipopeptides on a three-component monolayer, the data recorded above the POPG collapse were not analyzed. The introduction of lipopeptides into the subphase shifted the DPPG/POPG/CL isotherm toward larger molecular areas, and the effect was noticeable even at the lowest concentration of lipopeptides. Such behavior indicates that lipopeptides were incorporated into the DPPG/POPG/CL monolayer, and the effect was more pronounced at the beginning of the monolayer compression. However, the lift-off of the isotherms recorded on the subphase containing BC10-OP started at a larger molecular area compared with LC10-OP, which shows that, at low surface pressure, BC10-OP was incorporated more easily into the DPPG/POPG/CL membrane.
A more quantitative analysis of lipopeptide incorporation into the lipid monolayers can be performed on the basis of the limiting molecular area values determined from the Π-A isotherms. The results collected in Table 2 clearly show that the area per molecule grew with the increasing concentration of the lipopeptide in the subphase, which can be ascribed to the increasing number of molecules incorporated from the subphase [19,20]. The general trend was the same for both compounds, but the increase in the area per molecule was significantly greater for BC10-OP. For example, the values of the limiting molecular area determined for monolayers compressed on buffer with a 1 µM concentration of lipopeptides were found to be ~142 Å² and ~181 Å² for LC10-OP and BC10-OP, respectively. This demonstrates the enhanced ability of BC10-OP to be incorporated into the lipid monolayer compared to LC10-OP. Another important aspect of the properties of lipid monolayers can be obtained from the value of the compression modulus (Cs⁻¹), which is defined as follows [21]:

Cs⁻¹ = −A(∂Π/∂A),

where Π is the surface pressure and A is the area per molecule. This parameter provides information on the state in which the monolayer exists at a given surface pressure. It is widely accepted that a compression modulus in the range of 12.5-100 mN/m corresponds to the liquid-expanded state, 100-250 mN/m is characteristic of a liquid-condensed state, and values above 250 mN/m are indicative of a solid state. The maximum value of the compression modulus for the DPPG/POPG/CL monolayer compressed on a pure buffer subphase was 139 mN/m, which means that the monolayer was in a liquid-condensed state (see Figure 1). The addition of lipopeptides at the lowest concentration had no significant effect on the value of the compression modulus. However, at a concentration of 0.5 µM, a significant drop was already observed; finally, at a lipopeptide concentration of 1 µM, the values of Cs⁻¹ were found to be 70 mN/m and 54 mN/m for LC10-OP and BC10-OP, respectively. This demonstrates that the monolayers existed in a liquid-expanded state. Hence, the incorporation of lipopeptides decreased the molecular packing density within the lipid film and caused monolayer fluidization. However, the fluidizing effect of BC10-OP was stronger compared with LC10-OP, as can be deduced from the data collected in Table 2.
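As a worked illustration of the compression-modulus analysis above, the following sketch numerically evaluates Cs⁻¹ = −A(∂Π/∂A) along a Π-A isotherm and maps the result onto the monolayer-state ranges quoted in the text. The isotherm fragment is hypothetical, and the code is not the authors' processing pipeline.

```python
import numpy as np

def compression_modulus(area, pressure):
    """Cs^-1 = -A * dPi/dA evaluated numerically along a Pi-A isotherm.

    area: molecular areas (A^2/molecule) in the order recorded during
    compression (monotonically decreasing); pressure: Pi values (mN/m).
    """
    area = np.asarray(area, dtype=float)
    pressure = np.asarray(pressure, dtype=float)
    return -area * np.gradient(pressure, area)

def monolayer_state(cs_inv):
    """Map a compression-modulus value (mN/m) onto the ranges quoted above."""
    if cs_inv < 12.5:
        return "below the liquid-expanded range"
    if cs_inv < 100:
        return "liquid-expanded"
    if cs_inv < 250:
        return "liquid-condensed"
    return "solid"

# Hypothetical isotherm fragment near its steepest region
A = np.array([90.0, 85.0, 80.0, 75.0, 70.0])   # A^2 per molecule
Pi = np.array([10.0, 14.0, 19.0, 25.0, 33.0])  # mN/m
cs = compression_modulus(A, Pi)
print(cs.max(), monolayer_state(cs.max()))
```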
Since, under biological conditions, antimicrobial substances interact with already existing cell membranes, we also investigated the effect of lipopeptides on monolayers preformed at the air-buffer interface. For that purpose, the lipid monolayers were compressed to 35 mN/m. This value of the surface pressure was chosen to achieve a structural organization of the lipid film resembling that in natural cell membranes. After monolayer compression, the position of the barriers of the Langmuir trough was fixed to maintain a constant area occupied by the lipid film. Subsequently, a stock solution of either LC10-OP or BC10-OP lipopeptide was injected into the subphase under the film to reach a final concentration of 1 µM. Then, the changes in the surface pressure were monitored as a function of time (see Figure 2). The surface pressure of the DPPG/POPG/CL monolayer measured in the absence of lipopeptides decreased slightly over time, which may be related to the partial solubility of the lipids in the subphase. Additionally, all lipid components were negatively charged, and the monolayer at 35 mN/m existed in a densely packed and ordered state; hence, the repulsive forces between polar heads may have contributed to the expulsion of some molecules into the bulk of the subphase solution.

The injection of lipopeptides resulted in a rapid increase in surface pressure. An analysis of the slope of the curves during the first few minutes upon injection revealed that, initially, the kinetics of binding was similar for both lipopeptides. This is reasonable since, at the initial stage, the interactions were mostly driven by electrostatic attractions between the positively charged peptide moiety, which is the same in both lipopeptides, and the negatively charged lipid polar headgroups. Nevertheless, the analysis of the curves after longer times demonstrated that the lipopeptides showed noticeable differences in behavior. In the presence of LC10-OP, after the initial step of binding to the lipid film, the surface pressure curve reached a maximum and then started to decline gently. For BC10-OP, the surface pressure gradually increased until it reached a quasi-equilibrium of about 45 mN/m. This observed behavior may have been the result of slight differences in the action of the lipopeptides. We can assume that LC10-OP interacts electrostatically with negatively charged lipids, but this interaction is not counterbalanced by hydrophobic interactions driving lipopeptide insertion. Therefore, a substantial fraction of the lipopeptide molecules remains in the region of the polar heads. This prevents further accumulation of the lipopeptides, and only a small fraction anchors the lipophilic tail in the monolayer. Electrostatic interactions also occur between BC10-OP and the lipid polar heads; however, in this case, the barrier for the reorientation and incorporation of the lipopeptide into the lipid film is smaller, such that a larger fraction of molecules can insert between the lipid chains.

Although Langmuir monolayers are widely accepted model systems, they do not reproduce the bilayer architecture of cell membranes. Therefore, the membranolytic properties of the lipopeptides were further investigated with solid-supported lipid bilayers [22]. Such bilayers are certainly a better model of natural cell membranes. Nevertheless, they also have some limitations due to the interaction of lipid molecules with the substrate, which may, for example, affect the hydration of the polar heads in the bottom leaflet. This effect is often minimized using hydrophilic substrates such as mica, glass, or quartz. To evaluate the changes in topography and thickness of the DPPG/POPG/CL bilayer upon exposure to lipopeptides, we performed AFM experiments. This technique enables mesoscale imaging of surface structures under in situ conditions; hence, it is possible to follow the dynamics of numerous surface-related processes [23,24]. The bilayers were deposited on the mica surface by spreading small unilamellar vesicles. The AFM images of the resulting DPPG/POPG/CL bilayer before and after exposure to lipopeptides are illustrated in Figure 3. The bilayers were analyzed in terms of the lipopeptide-induced changes in their topography and thickness. The latter was determined on the basis of cross-sectional profiles taken along the defect sites, and it was calculated as the average height difference between the bare substrate and the region covered by the lipid membrane.
This approach is characterized by simplicity, but it should be noted that the obtained thickness of the lipid layers may be slightly underestimated due to the elastic deformation of the membrane under the load of the AFM probe. Thus, the obtained thickness may be slightly lower compared with equilibrium conditions. As demonstrated in Figure 3A, the average thickness of the intact DPPG/POPG/CL membrane was found to be 5.0 ± 0.2 nm. Based on the value of the Young's modulus, which was determined to be ~29 MPa, it can be concluded that the bilayer existed mostly in the gel (Lβ) phase [25]. After the injection of the lipopeptides, the morphology of the films changed noticeably. In both cases, the effect of membrane thinning was observed, and the Young's modulus determined in topographically lower regions was ~16 MPa. This reflects a lipopeptide-induced disordering effect and the transition from the gel Lβ phase to the liquid crystalline Lα phase [25]. After approximately 30 min of exposure, both phases coexisted; however, the Lβ phase domains were reduced to 10-20% of the scanned area. The thickness of the DPPG/POPG/CL bilayer in the Lα phase region was determined to be 4.0 ± 0.3 nm and 3.9 ± 0.4 nm for LC10-OP and BC10-OP, respectively. The AFM data indicate clearly that the interactions of lipopeptides with the supported lipid bilayer resulted in a decreased ordering of the lipids and led to the fluidization of the membrane. A similar effect was recently reported by our group for short amphiphilic lipopeptides with a general structure of Cn-fXXL, where n = 12, 14, or 16 and X denotes the Dab residue [16].

The reduction in bilayer thickness can be explained in terms of Israelachvili's concept of the critical packing parameter (cpp) [26][27][28]. This parameter is defined as the ratio between the hydrocarbon tail effective area and the projection area of the polar peptide headgroup. For lipid bilayers, the values of cpp are usually between 1/2 and 1. By using the additivity of the cpp, the weighted average value determined for the DPPG/POPG/CL bilayer was close to unity. The binding and partitioning of the lipopeptides is expected to lower the additive value of cpp. This is related to the conical shape of the lipopeptides resulting from the large size of the polar head (i.e., the value of cpp for lipopeptides is expected to be ~1/3). In order to compensate for the presence of the large polar heads, lipid molecules increase their tilt angle with respect to the surface normal, and the intermolecular distances are also increased. Consequently, the bilayer accommodating the lipopeptides becomes thinner and more fluid-like. However, due to the structural variation of the lipophilic chains in the lipopeptides (i.e., linear in LC10-OP vs. branched in BC10-OP), the exact tilting of the lipid molecules might be slightly different.

Quantitative information regarding the structure of the bilayers before and after lipopeptide binding was obtained from ATR-FTIR measurements. This enabled the analysis of the orientation and ordering of lipid molecules within the membrane. The orientation of the molecules assembled on the planar surface of the Si prism can be determined from polarized ATR-FTIR spectra, since the intensity of an IR band depends on the angle between the vector of the transition dipole moment of a given vibration and the electric field of the incident beam.
To determine the molecular orientation, one needs information about the direction and the amplitude of the electric field at the interface. To control the direction of the electric field, linearly polarized light can be used. For a penetration depth of the evanescent wave greatly exceeding the thickness of the lipidic assembly, a thin film approximation can be applied, and the amplitudes of the spatial components of the electric field (Ex, Ey, Ez) may be calculated using the following equations [29]:

Ex = 2cosθ1(sin²θ1 − n31²)^(1/2) / [(1 − n31²)^(1/2)((1 + n31²)sin²θ1 − n31²)^(1/2)],
Ey = 2cosθ1 / (1 − n31²)^(1/2),
Ez = 2n32²sinθ1cosθ1 / [(1 − n31²)^(1/2)((1 + n31²)sin²θ1 − n31²)^(1/2)],

where θ1 denotes the angle of incidence of the IR beam at the solid-liquid interface; n31 = n3/n1 and n32 = n3/n2, where n1, n2, and n3 are the refractive indices of the internal reflection element (Si prism), thin film (lipid membrane), and bulk medium (aqueous solution), respectively. In the case of the lipid bilayer deposited on the planar surface of an Si prism, the dichroic ratio (R) can be determined experimentally as the ratio of the absorbance of p-polarized and s-polarized light. Once the dichroic ratio and the electric field amplitudes of the evanescent wave are known, it is possible to calculate the order parameter (Sdipole) and orientation angle (θdipole) with respect to the surface normal for a given transition dipole moment [30,31]. If the structure of the molecules forming the film is well defined, there is a strict relationship between the direction of the transition dipole moment of a given vibration and the molecular axis. In the case of lipids, the transition dipole moments of νs(CH2) and νas(CH2) are oriented perpendicular (α = 90°) to the molecular axis defined by the trans segments of the hydrocarbon chains. Hence, using these bands, it is possible to estimate the average tilt angle of the acyl chains (θchain) with respect to the surface normal. Under the experimental conditions used in this work, the penetration depth of the evanescent wave at the wavelength corresponding to the C-H stretching region was expected to be ~0.20 µm, while the thickness of the bilayer was ~5.0 nm. This means that the thin film approximation could be safely applied to determine molecular orientation.

The successful formation of the DPPG/POPG/CL lipid membrane on the planar surface of the hemispherical Si prism was confirmed by ATR-FTIR spectra. An increase in positive bands in the C-H stretching region was observed over time, and, after approximately 60-90 min, the intensity of the bands did not change. The spectra shown in Figure 4 were recorded after approximately 90 min of lipid film deposition, upon gently washing the cell with pure buffer to remove excess liposomes. The spectrum of the bare Si prism recorded in 0.01 M PBS was used as a reference. The positions of the νas(CH2) and νs(CH2) bands enable conclusions to be drawn about the physical state and packing density of the acyl chains in a lipid membrane. Frequencies lower than ~2920 cm⁻¹ for the νas(CH2) band and lower than ~2850 cm⁻¹ for the νs(CH2) band are characteristic of the ordered gel state of a bilayer with fully stretched acyl chains [31,32]. Higher frequencies of the νas(CH2) and νs(CH2) bands are indicative of an increasing number of gauche defects and disordering. Hence, for the disordered liquid crystalline state, the νas(CH2) and νs(CH2) bands can be shifted up to ~2924 cm⁻¹ and ~2853 cm⁻¹, respectively.
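A minimal numerical sketch of this workflow is given below: it evaluates the thin-film field amplitudes for the stated optical constants (θ1 = 55°, n1 = 3.42, n2 = 1.45, n3 = 1.32) and converts a dichroic ratio into Schain and θchain, assuming a uniaxial film and the standard relation R = Ex²/Ey² + (Ez²/Ey²)·2⟨cos²θdipole⟩/(1 − ⟨cos²θdipole⟩). This is an illustration consistent with the cited literature rather than the authors' code; with R = 0.96 it reproduces values close to those reported below (Schain ≈ 0.78, θchain ≈ 22-23°).

```python
import numpy as np

def thin_film_fields(theta_deg, n1, n2, n3):
    """Ex, Ey, Ez at the prism/film interface in the thin-film approximation."""
    th = np.radians(theta_deg)
    n31, n32 = n3 / n1, n3 / n2
    s2, c = np.sin(th) ** 2, np.cos(th)
    denom = np.sqrt(1 - n31**2) * np.sqrt((1 + n31**2) * s2 - n31**2)
    Ex = 2 * c * np.sqrt(s2 - n31**2) / denom
    Ey = 2 * c / np.sqrt(1 - n31**2)
    Ez = 2 * n32**2 * np.sin(th) * c / denom
    return Ex, Ey, Ez

def chain_order_and_tilt(R, Ex, Ey, Ez, alpha_deg=90.0):
    """Order parameter and mean tilt of the chain axis from R = A_p/A_s,
    assuming a uniaxial film and a dipole at angle alpha to the chain axis."""
    # Invert R = Ex^2/Ey^2 + (Ez^2/Ey^2) * 2<cos^2 t>/(1 - <cos^2 t>)
    k = (R - Ex**2 / Ey**2) * Ey**2 / (2 * Ez**2)
    cos2 = k / (1 + k)                       # <cos^2 theta_dipole>
    S_dipole = (3 * cos2 - 1) / 2
    S_alpha = (3 * np.cos(np.radians(alpha_deg)) ** 2 - 1) / 2
    S_chain = S_dipole / S_alpha
    theta = np.degrees(np.arccos(np.sqrt((2 * S_chain + 1) / 3)))
    return S_chain, theta

Ex, Ey, Ez = thin_film_fields(55.0, n1=3.42, n2=1.45, n3=1.32)
print(chain_order_and_tilt(0.96, Ex, Ey, Ez))  # ~ (0.78, 23 deg); cf. 0.78 and 22 deg
```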
As shown in Figure 4, the νas(CH2) band was located at 2917 cm⁻¹, and the νs(CH2) band appeared at 2849 cm⁻¹, which proved that the acyl chains were ordered and the membrane was in a gel state. Since the spectra were recorded for both p- and s-polarization, it was possible to calculate the dichroic ratio, which was found to be 0.96 (±0.02) for the νs(CH2) band. The resulting value of the order parameter for the acyl chain (Schain) was 0.78 (±0.03), which was also indicative of the gel state. The average tilt angle of the acyl chains with respect to the surface normal (θchain) was determined to be 22°.

Furthermore, the DPPG/POPG/CL bilayers were exposed to lipopeptides. In this case, the spectra of the intact lipid bilayers in the absence of lipopeptides were used as the reference, and the changes in absorption were again monitored over time. The binding of the lipopeptides to the lipid bilayers caused the emergence of absorption bands within the (C-H) stretching region, and their intensity gradually increased for up to 30-45 min before reaching a steady state. Figure 5 shows the resulting spectra in the C-H stretching region recorded after ~45 min of exposure. In a control experiment without the lipid membrane, we found that the contribution of hydrocarbon chains from the lipopeptides to the absorption spectra was negligible. Therefore, the observed growth of the intensity of the (C-H) stretching bands could be interpreted as an increase in the tilt angle of the acyl chains with respect to the surface normal. The direction of the (C-H) stretching vibrations is perpendicular to the axis of the acyl chain; hence, an increased tilt angle of the lipid molecules enlarges the component of the transition dipole moment parallel to the surface normal. Interestingly, the νas(CH2) and νs(CH2) bands were located at ~2923 cm⁻¹ and ~2853 cm⁻¹, demonstrating that the ordering of the lipids was affected by the presence of the lipopeptides, and the bilayers existed in a disordered liquid crystalline state. To obtain quantitative information on the orientation of the lipids upon exposure to lipopeptides, we performed a more detailed analysis of the p- and s-polarized spectra. The relevant parameters, including dichroic ratios (R), order parameters (Schain), and tilt angles (θchain), extracted from the p- and s-polarized spectra of the DPPG/POPG/CL bilayers before and after lipopeptide binding, are collected in Table 3. In the case of the bilayer exposed to LC10-OP, the order parameter was 0.50 (±0.06) and the acyl chain tilt angle with respect to the surface normal was 35° (±2°). The values of the order parameter and the tilt angle determined for the bilayer exposed to BC10-OP were 0.43 (±0.04) and 38° (±1°), respectively. Hence, in both cases, the order parameters decreased and the tilt angles with respect to the surface normal increased compared with the intact DPPG/POPG/CL bilayer. These results show that lipopeptide binding decreased the ordering of lipid molecules within the membrane, and the latter became more fluid. This was related to the substantial change in the tilt angle of the acyl chains from ~22° for the intact bilayer to ~35-38° after lipopeptide binding. Simple geometrical considerations led to the conclusion that such a change in tilt angle would result in ~0.6 nm thinning of the membrane (the projected chain thickness scales with the cosine of the tilt angle; for example, for a tilt-sensitive thickness of ~4 nm, 4 nm × (1 − cos38°/cos22°) ≈ 0.6 nm). According to the AFM data, the membrane thinning was found to be ~1.0 nm; however, it should be noted that the tip-sample interaction during AFM imaging results in elastic deformation of the soft film [33].
Consequently, the bilayer is slightly compressed under the tip load, which in turn gives underestimated values of the thickness.

Simultaneously with the increase in the (C-H) bands, the emergence of a broad ν(C=O) band from the lipid ester bond was observed, accompanied by amide I' and amide II' bands (see Figure 6). The presence of the ester ν(C=O) band suggests that the change in the tilt angle of the lipid molecules indeed occurred after lipopeptide binding. However, there was a slight difference in the position of the global maximum of this band. Specifically, upon binding with LC10-OP, this maximum was located at ~1733 cm⁻¹, while, after binding BC10-OP, the maximum was found at ~1742 cm⁻¹. The position of the ester ν(C=O) band is known to be sensitive to the extent of hydrogen bonding and hydration of the polar head region of a lipid membrane [34]. In the case of phosphatidylglycerols and cardiolipins, there are usually two components of the ester carbonyl band, centered at ~1742 cm⁻¹ and 1728 cm⁻¹, corresponding to dehydrated and hydrated carbonyl groups, respectively [35]. Hence, the higher frequency observed for the ester ν(C=O) band after binding of BC10-OP may indicate that the lipopeptide provided a less hydrated or less polar environment for the carbonyls compared with LC10-OP. It should be noted that the carbonyl group is located in the interfacial region between the hydrophilic and hydrophobic parts of the lipid molecule. Therefore, the differences in hydration or polarity of the environment surrounding the carbonyl groups may reflect different depths of lipopeptide insertion. Specifically, the lipophilic part of BC10-OP may penetrate deeper into the hydrophobic core of the membrane. Such an interpretation seems reasonable if we consider that the binding of BC10-OP resulted in a lower value of the order parameter and higher chain tilting compared with LC10-OP.
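The two-component interpretation of the ester carbonyl band lends itself to a simple band decomposition. The sketch below fits two Gaussians with fixed centers at ~1742 and ~1728 cm⁻¹ to a synthetic band and reports the dehydrated-component area fraction; it is an illustrative, assumption-laden example (synthetic data, fixed band centers, Gaussian line shapes), not the analysis actually performed in this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, a2, w1, w2, c1=1742.0, c2=1728.0):
    """Sum of two Gaussians with fixed centers (cm^-1) for the ester C=O band."""
    return (a1 * np.exp(-((x - c1) ** 2) / (2 * w1**2))
            + a2 * np.exp(-((x - c2) ** 2) / (2 * w2**2)))

rng = np.random.default_rng(0)
x = np.linspace(1700.0, 1770.0, 200)  # wavenumber axis (cm^-1)
y = two_gaussians(x, 0.8, 0.5, 8.0, 9.0) + rng.normal(0, 0.005, x.size)  # synthetic band

# Fit amplitudes and widths only; the centers stay at the literature positions
popt, _ = curve_fit(two_gaussians, x, y, p0=[0.5, 0.5, 10.0, 10.0])
a1, a2, w1, w2 = popt
frac_dehydrated = (a1 * w1) / (a1 * w1 + a2 * w2)  # Gaussian area scales as a*w
print(f"dehydrated component area fraction ~ {frac_dehydrated:.2f}")
```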
Further differences in the behavior of the lipopeptides after binding with the lipid bilayer became apparent during the analysis of the amide bands (see Figure 6). The position of the amide I' band is sensitive to the conformation of the peptide chain [36]. For LC10-OP, the global maximum of the amide I' band occurred at ~1649 cm⁻¹, while, for BC10-OP, it was observed at ~1647 cm⁻¹; however, there were visible shoulders at 1660 cm⁻¹. These values were indicative of an irregular and unordered structure of the peptide moieties with a plausible contribution from β-turns. The presence of amide II' bands located at ~1458 cm⁻¹ and 1450 cm⁻¹ demonstrates that deuteration of the amide NH groups occurred [36]. Since the peptide moieties did not adopt any well-defined secondary structure, it was difficult to determine their orientation upon binding with the lipid bilayer. However, the average tilt angles of the amide C=O transition dipole moment with respect to the surface normal were found to be 79° (±4°) and 65° (±3°) for LC10-OP and BC10-OP, respectively. Considering this divergence in conjunction with the differences in the hydration of the lipid ester group, this may indicate that the plane of the amide bonds in LC10-OP was almost parallel to the plane of the bilayer, while, in BC10-OP, the peptide moiety either adopted a slightly more tilted orientation or its molecular axis was rotated, enabling deeper insertion of the lipopeptide into the membrane. Such an interpretation is in line with the results of the surface pressure measurements, where more efficient insertion was observed for BC10-OP.

Conclusions

We demonstrated that both LC10-OP and BC10-OP display antimicrobial activity, with the lowest values of minimum inhibitory concentrations found for Gram-positive S. aureus and S. epidermidis. Due to the amphipathic nature of the lipopeptides, the most probable target of their antimicrobial action is the cell membrane. Therefore, the mechanism of their action was evaluated on a molecular level using model lipid films composed of DPPG/POPG/CL, mimicking the membrane of Gram-positive bacteria. The surface pressure measurements revealed that both lipopeptides showed the ability to bind and incorporate into the lipid monolayers. As a result, the limiting molecular area was substantially increased, and the changes in the compression modulus proved membrane fluidization. The same effect was observed for the DPPG/POPG/CL bilayer supported on a solid substrate. As can be concluded from the AFM data, the exposure of the bilayer to lipopeptides led to a transition from the ordered gel phase to a disordered liquid crystalline phase, which was manifested by ~1.0 nm thinning of the membrane. This observation correlates with the spectroscopic results. Quantitative analysis using ATR-FTIR measurements revealed that lipopeptide binding caused a substantial increase in the average tilt angle of the lipid acyl chains with respect to the surface normal. This angle changed from ~22° for the intact DPPG/POPG/CL bilayer to ~35° for LC10-OP and ~38° for BC10-OP. The spectroscopic results also demonstrated that the peptide moieties in both molecules did not adopt any well-defined secondary structure upon binding with the lipid membrane. Interestingly, the lipid films were noticeably more affected by BC10-OP, which seemed to insert deeper into the DPPG/POPG/CL membrane compared with LC10-OP. Since the peptide motifs are the same in both lipopeptides, the observed effect can be ascribed to the small difference in the structure of the lipophilic chain, which alters the balance between the hydrophobic and hydrophilic portions of the molecules.
The challenge of preventing and containing outbreaks of multidrug-resistant organisms and Candida auris during the coronavirus disease 2019 pandemic: report of a carbapenem-resistant Acinetobacter baumannii outbreak and a systematic review of the literature

Background
Despite the adoption of strict infection prevention and control measures, many hospitals have reported outbreaks of multidrug-resistant organisms (MDRO) during the coronavirus disease 2019 (COVID-19) pandemic. Following an outbreak of carbapenem-resistant Acinetobacter baumannii (CRAB) in our institution, we sought to systematically analyse characteristics of MDRO outbreaks in times of COVID-19, focussing on contributing factors and specific challenges in controlling these outbreaks.

Methods
We describe the results of our own CRAB outbreak investigation and performed a systematic literature review for MDRO (including Candida auris) outbreaks which occurred during the COVID-19 pandemic (between December 2019 and March 2021). Search terms were related to pathogens/resistance mechanisms AND COVID-19. We summarized outbreak characteristics in a narrative synthesis and contrasted contributing factors with implemented control measures.

Results
The CRAB outbreak occurred in our intensive care units between September and December 2020 and comprised 10 patients (thereof seven with COVID-19) within two distinct genetic clusters (both ST2 carrying OXA-23). Both clusters presumably originated from COVID-19 patients transferred from the Balkans. Including our outbreak, we identified 17 reports, mostly caused by Candida auris (n = 6) or CRAB (n = 5), with an overall patient mortality of 35% (68/193). All outbreaks involved intensive care settings. Non-adherence to personal protective equipment (PPE) or hand hygiene (n = 11), PPE shortage (n = 8) and high antibiotic use (n = 8) were most commonly reported as contributing factors, followed by environmental contamination (n = 7), prolonged critical illness (n = 7) and lack of trained HCW (n = 7). Implemented measures mainly focussed on PPE/hand hygiene audits (n = 9), environmental cleaning/disinfection (n = 9) and enhanced patient screening (n = 8). Comparing potentially modifiable risk factors and control measures, we found the largest discrepancies in the areas of PPE shortage (risk factor in 8 studies, addressed in 2 studies) and patient overcrowding (risk factor in 5 studies, addressed in 0 studies).

Conclusions
Reported MDRO outbreaks during the COVID-19 pandemic were most often caused by CRAB (including our outbreak) and C. auris. Inadequate PPE/hand hygiene adherence, PPE shortage, and high antibiotic use were the most commonly reported potentially modifiable factors contributing to the outbreaks. These findings should be considered for the prevention of MDRO outbreaks during future COVID-19 waves.

Supplementary Information
The online version contains supplementary material available at 10.1186/s13756-022-01052-8.

Introduction

Multidrug-resistant organisms (MDRO) pose an ever-increasing threat to health-care systems around the world. Inadequate hand hygiene, high-level use of broad-spectrum antibiotics, patient comorbidities, and the use of medical devices are known risk factors for colonization or infection with MDRO [1,2]. Both inciting and moderating effects of the COVID-19 pandemic on the incidence of MDRO have been suggested [3,4].
Factors which might facilitate the spread of MDRO include altered characteristics of health-care worker (HCW)-patient contact, inadequate adherence to PPE, low staffing, breaches in environmental cleaning, and inadequate antibiotic use [5][6][7]. On the other hand, increased awareness regarding hand hygiene and other infection control measures during the pandemic has been reported, which might result in a reduction in the spread of these pathogens, particularly in settings with a low incidence of COVID-19 [8].

During the second wave of the pandemic (September-December 2020), we experienced an outbreak of carbapenem-resistant Acinetobacter baumannii (CRAB) on the intensive care units (ICU) of our tertiary care centre. CRAB is a globally emerging healthcare-related pathogen, which is endemic in many countries of Southern Europe [9]. In Switzerland, only sporadic A. baumannii infections and outbreaks have been reported, mostly as a result of importation from high-risk countries [10,11]. Here, sharing our own experience and those of others, we aimed at describing features of MDRO (including C. auris) outbreaks during the COVID-19 pandemic, and at characterizing challenges in outbreak prevention and management specific to the exceptional conditions faced during the pandemic.

Study design and ethics

This is an outbreak report and a systematic review of the literature concerning MDRO outbreaks during the COVID-19 pandemic. The outbreak is reported according to the ORION statement [12], the review according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement, where applicable [13]. The review protocol was registered on Prospero (Registration number: 259474) [14].

Outbreak investigation

We retrospectively studied a CRAB outbreak that involved the medical and surgical ICU in our 700-bed tertiary hospital in Eastern Switzerland. Both COVID-19 and non-COVID-19 patients were treated on the two affected ICUs. Due to the high number of patients during the pandemic, bed capacities were extended from 12 to 16, and from 24 to 32 beds for the medical and surgical ICU, respectively. Cases were defined as patients with CRAB isolation in any kind of sample, including clinical and screening specimens. Patient characteristics including hospital mortality were collected from chart reviews; the impact of CRAB infection (non-related, partially related, and directly attributable) on in-hospital mortality was assessed. The outbreak investigation was led by a multidisciplinary infection control team (physicians, nurses, hospital epidemiologists) and consisted of contact screenings (defined as sharing the room with a CRAB patient for ≥ 24 h), weekly cross-sectional screening of all ICU patients, routine screenings at day 5 and 10 after ICU discharge for all patients with a stay longer than 48 h on either ICU, and sampling of medical equipment, patient environment, and sinks. Patient screening sites included the rectum, skin (axilla and groin), urine (if a catheter was in situ), and the respiratory tract (if ventilated or in case of tracheostomy) [15]. The further containment …

We considered studies from peer-reviewed journals and pre-prints, regardless of interventional or observational design, such as cohort and case-control studies, outbreak reports, case-series, research letters or editorials, and epidemiologic surveys. Conference abstracts and studies strictly reporting laboratory and no clinical data were excluded.
Screening of records

Initial screening by titles and abstracts and identification of studies that met the inclusion and exclusion criteria were done by eight authors (RT, MSe, GS, SH, DF, DV, PK, MSch). Studies meeting the inclusion criteria were subjected to full-text review by two independent reviewers (RT and MSe); any disagreement was solved by a third arbiter (PK). Duplicates were excluded through automated deduplication by the librarian, initial screening, and full-text review.

Data extraction

Data extraction was again performed by two independent reviewers (RT and MSe), with a third as arbiter (PK). The following data were extracted from the studies in the review: name of first author, time span (month/year to month/year) and country of outbreak, setting (acute care, intensive care, long-term care, other), ward type (COVID-19 vs. non-COVID-19 ward), causative organism and mechanism of resistance, proof of clonality (resistance pattern, pulsed-field gel electrophoresis (PFGE), whole-genome sequencing (WGS)), number of affected patients (colonized or infected; clinical or screening samples), COVID-19 status of patients, and patient outcome (i.e. hospital mortality). Furthermore, we assessed factors which potentially promoted and factors which might have helped in containing the outbreaks, all according to the respective study authors.

Risk of bias

Quality of reporting in the included studies was evaluated independently by six authors (RT, MSe, GS, SH, DF, PK) according to an adapted version of the ORION statement [12]. The ORION statement consists of a 22-item checklist and a summary table. We selected 14 items which could be applied to all studies in our review (Additional file 1: Figure S1). Because no established cut-off exists to rate the quality of the studies based on the adapted ORION statement, the risk of bias analysis was purely descriptive.

Statistical analysis

Due to the descriptive nature of the study, no quantitative analysis was conducted.

Outbreak detection and patient characteristics

The first CRAB was isolated in the medical ICU on September 30th 2020 from the blood of a COVID-19 patient repatriated from North Macedonia, 11 days after a negative admission screening. An outbreak investigation was started on October 11th, after a second patient tested positive for CRAB in a bronchial fluid sample taken during routine diagnostic bronchoscopy. Screening of 93 contact patients revealed seven positive cases. By the end of December 2020, one additional patient had been transferred from the Balkans to the surgical ICU with a positive admission screening for CRAB, resulting in a total of 10 cases (thereof 7 COVID-19 patients) (Table 1). Eight patients were hospitalized on the medical, and two on the surgical ICU. CRAB was detected mainly in respiratory specimens (7/10); 5 patients were deemed to be infected with CRAB, of whom 3 died as a direct consequence of infection.

Microbiological analyses

Antimicrobial resistance testing showed that all 10 isolates were resistant to carbapenems (MICs of imipenem and meropenem > 8 mg/l). Using single nucleotide polymorphism (SNP) analysis, two distinct clusters could be identified (Fig 1). The larger cluster showed 4 isolates with an identical cgMLST pattern (i.e. 0 SNPs). On a longitudinal axis, the number of SNPs increased over a period of 65 days, with the highest SNP count of 9 (compared to the presumed index) in the last isolate of that cluster.
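Cluster assignment of this kind can be reproduced from a pairwise SNP distance matrix. The sketch below applies single-linkage clustering with an assumed 15-SNP cut-off to a hypothetical six-isolate matrix; the threshold and the distances are illustrative only, since outbreak cut-offs depend on the organism, its mutation rate, and the sampling time span.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Hypothetical pairwise SNP distances for six isolates (symmetric, zero diagonal)
D = np.array([
    [0,  1,  3,  9, 40, 41],
    [1,  0,  2,  8, 39, 40],
    [3,  2,  0,  7, 42, 43],
    [9,  8,  7,  0, 38, 39],
    [40, 39, 42, 38, 0,  2],
    [41, 40, 43, 39, 2,  0],
], dtype=float)

# Single-linkage clustering with an assumed 15-SNP cut-off
Z = linkage(squareform(D), method="single")
clusters = fcluster(Z, t=15, criterion="distance")
print(clusters)  # e.g. [1 1 1 1 2 2]: a four-isolate and a two-isolate cluster
```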
CRAB isolates could be attributed to two separate outbreaks: a larger cluster with eight patients and another cluster with two patients. All isolates were of cgMLST sequence type 2, with OXA-23 as the carbapenem-resistance conferring enzyme. In both clusters, the first isolate was detected in patients transferred from the Balkans.

Further outbreak investigations

Sampling of bronchoscopes and of all other environmental samples (bronchoscope storage tubes, glove dispensers, ultrasound, computer keyboards, laryngoscope, sinks; n = 15) remained negative. HCW observations revealed inadequate handling of PPE, especially non-sterile gloves (e.g. no hand hygiene before gloving, gloving without indication, double gloving, and disinfection of gloves). We performed four staff training sessions regarding correct use of PPE and hand hygiene on both ICUs. Environmental cleaning was intensified. Restriction of carbapenem use on the medical ICU led to a 91% reduction between September (28.3 DDD/bed) and December 2020 (2.5 DDD/bed). No further cases were detected after December 28th 2020 (Fig 2). Outbreak investigations were stopped after three negative cross-sectional screenings did not reveal any more cases.

Outbreak contributors and containment measures

According to the study authors, the most commonly reported risk factors potentially contributing to the outbreaks were inadequate PPE or hand hygiene adherence (reported by 11 studies), PPE shortage (8 studies) and high antibiotic use (8 studies) (Fig 4, Additional file 1: Table S2). Other commonly mentioned risk factors included environmental contamination (7 studies) and prolonged critical illness (7 studies). For outbreak control, studies most frequently reported enhanced environmental disinfection (n = 9) and implementation/reinforcement of PPE/hand hygiene audits (n = 9). Enhanced contact screening was reported from 8 studies. Contrasting potentially modifiable risk factors and containment measures, we identified the largest discrepancy for PPE shortage (reported by 8 studies as a risk factor, addressed by 2 studies) and patient overcrowding (reported by 5 studies as a risk factor, addressed by 0 studies).

Quality of included studies

Of the 14 items of the adapted ORION statement, studies reported a median of 11 items (range 6-13) (Additional file 1: Figure S1). Four studies reported only 8 items or less.

Discussion

We demonstrate the spread of CRAB in our ICUs during the COVID-19 pandemic within two unrelated clusters. Our systematic review shows that MDRO outbreaks have occurred worldwide during the pandemic and that C. auris and CRAB are most commonly implicated. We identified inadequate adherence to PPE and hand hygiene, PPE shortage, and high antibiotic use as potentially modifiable factors contributing to these outbreaks. Our institutional CRAB outbreak is notable for two reasons. First, we observed two independent CRAB clusters, which most likely originated from patients transferred from the Balkans. Indeed, ST2-CRAB are prevalent in the Balkans, with blaOXA-23 as the predominant gene [31]. Second, except for the index patients, no other affected patient was transferred from the Balkans, making subsequent nosocomial transmission likely. While repatriations from the Balkans are common in our hospital, CRABs have only sporadically been detected in recent years.
We therefore think that the particular circumstances during the COVID-19 pandemic substantially contributed to their spread, with inadequate PPE adherence and carbapenem overuse being the most important contributing factors in our setting.

The systematic review revealed that CRAB, together with C. auris, are the most common pathogens causing outbreaks among COVID-19 patients. Both of these microorganisms have a high propensity to contaminate hospital environments, with the ability to survive and persist for a prolonged time on surfaces compared to other pathogens [32][33][34]. Indeed, environmental contamination was considered a major contributing factor in our review, and enhanced cleaning and disinfection was included in the majority of studies as a containment measure. Also, Acinetobacter sp. and Candida have been identified as the most common pathogens causing superinfections in COVID-19 patients according to a systematic review including 118 individual studies [35]. Here, patient factors such as prolonged length of stay, mechanical ventilation, use of broad-spectrum antibiotics, and use of systemic steroids to treat COVID-19 might play a role. Of course, overrepresentation of C. auris and CRAB due to publication/reporting bias cannot be excluded. Also, our search strategy might have led to overrepresentation of these pathogens, although we tried to be as inclusive as possible in our literature search.

All reported outbreaks involved the intensive care setting, and prolonged critical illness was mentioned in several studies as a contributing factor. In line with this finding, a massive increase of carbapenem-resistant Enterobacteriaceae (CRE) in an Italian ICU during the pandemic has been attributed to the high intensity of care for these patients; patients with prone positioning, which requires the help of four to five HCW, were significantly more likely to be colonized with CRE [6]. The critical disease state of many patients involved in these outbreaks is also reflected in the high overall mortality of 35%. However, in a meta-analysis assessing mortality of COVID-19 patients on ICUs (irrespective of bacterial superinfection), the authors found an even higher mortality of 41.6% [36].

According to our systematic review, inadequate adherence to PPE and hand hygiene was the most commonly reported contributing factor. As seen in our hospital, the reinforced infection control measures put in place during the pandemic have paradoxically led in some cases to lapses in adherence to PPE and hand hygiene. Examples include failure to remove gloves after contact with patient surroundings and the use of alcohol-based sanitizer on gloved hands, which have previously been associated with increased cross-contamination [37,38]. Also, cohorting of patients in multi-patient COVID-19 rooms did not always allow for the minimal safety distance between patients' beds. According to our review, PPE shortage was reported by several studies from across the globe. Similarly, PPE shortage was reported by over 50% of 2,700 respondents in an international survey among intensive care personnel. The shortages in medical PPE might in some cases have led to unsafe practices such as sharing of PPE between health-care personnel, which itself increases the risk of cross-contamination [39]. A critically low HCW/patient ratio, reported as understaffing, shortage of trained HCW, or patient overcrowding, was another important contributing factor in our review.
A low HCW/patient ratio is a well-established risk factor for the transmission of MDRO, as shown for MRSA [40]. In addition, many hospitals had to compensate for this shortage by recruiting new personnel who might have lacked the necessary competencies in infection control [6]. Of note, workload reduction, as a measure to improve the HCW/patient ratio, was reported in only a single study as part of the outbreak control measures [16].

High antibiotic use in COVID-19 patients was considered a major outbreak contributor in seven studies. In fact, 75% of COVID-19 patients receive antibiotic treatment, while only 9% actually experience bacterial super-infections [7]. At the same time, antimicrobial stewardship (AMS) programs are currently being deprioritised in many hospitals [41], which carries the risk of increasing the selection pressure on microorganisms and facilitating the emergence and cross-transmission of MDROs. In line with this hypothesis, the massive reduction of carbapenem consumption was one of the most important measures to control the CRAB outbreak in our hospital. Similarly, AMS was a part of successful outbreak control in three other studies. Another particular feature of the COVID-19 pandemic was the disruption of MDRO screening surveillance programs due to constraints on personnel and financial resources. In a survey performed by the Global Antimicrobial Resistance and Use Surveillance System, 64% of the 68 participating countries reported a decrease in the number of requested screening cultures during the pandemic [42]. It remains to be seen how this influences the spread of antimicrobial resistance in the long term.

The main limitation of our study is potential publication bias. Outbreaks with more successful outcomes, and reports from settings where the involved HCW still had the resources to review and publish such outbreaks, might be overrepresented. Furthermore, some outbreak reports did not contain any information on risk factors and containment measures, which suggests reporting bias. Finally, causality cannot be inferred between reported risk factors and intervention measures. However, we still deem these aggregated experiences to be valuable for infection control specialists and intensive care personnel alike for the prevention of future MDRO outbreaks during times of restricted resources.

Conclusion

This outbreak report and systematic review show that C. auris and CRAB are the most frequently identified pathogens associated with MDRO outbreaks during the COVID-19 pandemic. These data also suggest that many factors which contributed to MDRO outbreaks during the COVID-19 pandemic are potentially modifiable. These mainly include adherence to PPE and hand hygiene, PPE shortage, and antibiotic use. We as health-care personnel should not let our guard down or discontinue established infection prevention and control practices, as these are still the best tools at our disposal to prevent the spread of MDROs in our hospitals.
The dynamic response of human lungs due to underwater shock wave exposure

Since the 19th century, underwater explosions have posed a significant threat to service members. While there have been attempts to establish injury criteria for the most vulnerable organs, namely the lungs, existing criteria are highly variable due to insufficient human data and the corresponding inability to understand the underlying injury mechanisms. This study presents an experimental characterization of isolated human lung dynamics during simulated exposure to underwater shock waves. We found that the large acoustic impedance at the surface of the lung severely attenuated transmission of the shock wave into the lungs. However, the shock wave initiated large bulk pressure-volume cycles that are distinct from the response of the solid organs under similar loading. These pressure-volume cycles are due to compression of the contained gas, which we modeled with the Rayleigh-Plesset equation. The extent of these lung dynamics was dependent on physical confinement, which in real underwater blast conditions is influenced by factors such as rib cage properties and donned equipment. Findings demonstrate a potential causal mechanism for implosion injuries, which has significant implications for the understanding of primary blast lung injury due to underwater blast exposures.

Introduction

Since the early days of naval warfare in the 19th century, underwater explosions have been responsible for serious injury or even death [1,2]. Underwater explosive devices such as mines, torpedoes, and depth charges were increasingly common in the early 20th century [3]. In World War II alone, over 1,500 casualties related to underwater blast were documented [4]. While there have been fewer documented cases of injuries due to underwater blasts in recent decades [5], naval warfare has become one of the major emerging battlespaces of the future [6]. As a result, injury or death to service members exposed to underwater blasts has the potential to become more prevalent in the future.

While extensive work has been undertaken to investigate safe and lethal exposure levels under primary blast exposure in air [7][8][9][10], the translation of these limits to underwater blast injury risk is challenging due to fundamental differences in shock wave characteristics between water and air. Shock waves propagate farther and faster in water due to the higher density of water and the corresponding increased speed of sound. When these shock waves reflect off the water surface or ocean floor, they can produce either constructive or destructive interference [11]. Furthermore, the underwater detonation can cause gas bubble cavitation, which generates additional shock waves [11]. Due to the similar densities between humans and water, underwater shock waves are able to transmit more energy to the organs, posing a greater risk to humans [12,13]. Conversely, the human body reflects more energy from shock waves in air [12].
Gas-containing organs are most vulnerable to underwater primary blast injury [14] due to the sudden decrease in material density and sound speed at the gas-tissue interface, which results in increased shock wave energy deposition [12]. Trauma to the lungs is particularly lethal [15][16][17], as it can result in pulmonary hemorrhage and contusions, gas embolisms, and pneumothorax, among other conditions [16,18,19]. Mechanisms of underwater blast injuries are thought to closely resemble those suggested for air blasts [20], i.e., spallation, implosion, and inertia [16,21,22]. However, these mechanisms are poorly understood due to the lack of experimental evidence.

Historically, underwater blast injury studies have sought to establish exposure guidelines for the lungs [13,20]. Data that inform these guidelines are based on air blast, unscaled animal models, computational models, medical case reports, clinical experience, or even speculation [13,20]. The broad range of methods has led to highly inconsistent guidelines, without a consensus exposure metric (e.g., peak pressure or impulse, or charge weight and range). Most importantly, these guidelines are not founded on well-characterized experimental data for humans, which is critical for the establishment of relevant and precise injury guidelines. Until a robust mapping between underwater explosions and human injury is established, military missions will continue to expose service members to underwater blast with an unknown risk of injury or death.

To address the critical need for high-fidelity human lung data, a series of shock tube experiments was conducted in which isolated human lungs were exposed to underwater shock waves in a water chamber. The pressure and volumetric response were measured with a combination of pressure sensors and high-speed video, and compared to equivalent measurements from solid organs, i.e., the liver and spleen. Finally, an analytical model based on the Rayleigh-Plesset equation was utilized to further explain the mechanisms that lead to the observed pressure-volume changes and to inform future injury risk metrics.
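For readers unfamiliar with the Rayleigh-Plesset model, the sketch below integrates the classical equation ρ(RR̈ + (3/2)Ṙ²) = p_gas − p_∞(t) − 2σ/R − 4μṘ/R for an equivalent gas volume driven by a decaying overpressure pulse. All parameter values, including the pulse shape, are illustrative placeholders rather than the fitted values used in this study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not the study's fitted values)
rho, p0 = 1000.0, 101325.0      # water density (kg/m^3), ambient pressure (Pa)
R0, kappa = 0.05, 1.4           # rest radius of equivalent bubble (m), polytropic index
mu, sigma = 1.0e-3, 0.072       # water viscosity (Pa s), surface tension (N/m)

def p_inf(t, peak=200e3, tau=2e-3):
    """Far-field pressure: ambient plus an exponentially decaying overpressure pulse."""
    return p0 + peak * np.exp(-t / tau)

def rp_rhs(t, y):
    """Rayleigh-Plesset: rho*(R*Rdd + 1.5*Rd^2) = p_gas - p_inf - 2*sigma/R - 4*mu*Rd/R."""
    R, Rdot = y
    p_gas = (p0 + 2 * sigma / R0) * (R0 / R) ** (3 * kappa)  # polytropic gas content
    Rddot = ((p_gas - p_inf(t) - 2 * sigma / R - 4 * mu * Rdot / R) / (rho * R)
             - 1.5 * Rdot**2 / R)
    return [Rdot, Rddot]

sol = solve_ivp(rp_rhs, (0.0, 0.02), [R0, 0.0], max_step=1e-5, rtol=1e-8)
V = 4.0 / 3.0 * np.pi * sol.y[0] ** 3
print(f"min volume / rest volume: {V.min() / (4/3 * np.pi * R0**3):.2f}")
```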
Shock tube setup

A shock tube, designed to simulate blast loading pressure profiles, generated short-duration underwater overpressure waves to expose submerged organs (Fig 1). The shock tube was divided into four sections: the driver, driven, diffuser, and water chamber (Fig 1A). The driver and driven sections are separated by a diaphragm that prevents the flow of pressurized helium from the driver to the driven section. The diaphragm ruptures once a threshold pressure is reached, which produces a shock wave as the pressure wave travels along the driven section. Pressures were measured by a PCB sensor (PCB113B21, PCB Piezotronics, Depew NY). The diaphragm condition was calibrated to deliver repeatable rupture pressures. The pressure wave is then radially expanded from 0.15 m to 0.43 m in diameter by an air-filled conical diffuser as it travels towards the water-filled chamber. The diffuser and water-filled chamber are separated by a rubber diaphragm that ensures that water does not enter the driven section, but still allows for transmission of the shock wave into the water chamber. Tests were run at two burst pressures of 350 kPa and 700 kPa for the liver and the spleen, and 350 kPa and 525 kPa for the lung. There was a discrepancy in the higher burst pressure between the liver and spleen, and the lung, due to manufacturing and storage differences of the diaphragm materials. Three repeat tests were conducted at each burst pressure for each specimen.

The burst pressure was measured by a Kulite pressure sensor (HKS-37, Kulite Semiconductors, Leonia, NJ), installed through the wall of the driver section. The dynamics of the shock wave pressure as it propagates along the driven section was measured by an additional five Kulite pressure sensors placed at predetermined intervals. The pressure in the water chamber was measured by five piezoelectric PCB pressure sensors (PCB113B21, PCB Piezotronics, Depew NY), installed through the wall of the water chamber. All pressure data were collected at 1 MHz using a 16-bit high-speed data acquisition system (DEWE 801; Dewetron, Wakefield, RI). Pressure values are relative to atmospheric pressure.

Fig 1. Experimental setup and organ preparation method for applying overpressure to organs within a water-filled chamber. (A) Schematic illustrating the key components of the shock tube, including pressure sensors (red downward-pointing triangles) for measuring the pressure evolution of the shock wave in the driver and driven section, and pressure sensors (magenta downward-pointing triangles) for measuring the overpressure applied to the organ in the water chamber. Lateral images show the lateral placement (green triangle) and outline (yellow dotted line) of the (B) liver and spleen, and (C) lung within the water chamber. Scale bar, 0.1 m. To prepare the organs for testing, the (D) liver and (E) spleen are encapsulated in a gelatin puck and instrumented with six pressure sensors (blue circle) inserted into the parenchyma and two reference pressure sensors inserted into the gelatin. (F) The right and left lungs were encapsulated in a polyethylene bag installed with a lung insufflation port, a suction port for removing air leakage from the lung, and two sensor ports that pass through eight pressure sensors (blue circle), with six located under the visceral pleura, one inserted into the main bronchus, and one positioned in the water chamber as a reference measurement. https://doi.org/10.1371/journal.pone.0303325.g001

Lateral images of the dynamic events during shock wave propagation in the water chamber were recorded by a high-speed camera (v711, Phantom, Wayne, NJ) at 2,000 fps.
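Blast loading pressure profiles of the kind the shock tube is designed to simulate are classically idealized by the Friedlander waveform. The snippet below generates such a pulse and integrates its positive-phase impulse; the peak pressure and duration are placeholders, not measured values from these experiments.

```python
import numpy as np

def friedlander(t, p_peak, t_pos, b=1.0):
    """Friedlander waveform: instantaneous rise to p_peak, decay through zero
    at the end of the positive phase t_pos (decay constant b)."""
    t = np.asarray(t, dtype=float)
    p = p_peak * (1.0 - t / t_pos) * np.exp(-b * t / t_pos)
    return np.where(t >= 0.0, p, 0.0)

t = np.linspace(-0.001, 0.01, 2000)             # time axis (s)
p = friedlander(t, p_peak=200e3, t_pos=2e-3)    # 200 kPa peak, 2 ms positive phase
impulse = np.trapz(np.clip(p, 0.0, None), t)    # positive-phase impulse (Pa s)
print(f"positive-phase impulse ~ {impulse:.0f} Pa s")
```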
The last test series for the lung had an additional rear-facing high-speed camera, which was used to compute the dynamic volume change due to the shock wave. This test series used a right lung specimen tested at both 350 kPa and 525 kPa burst pressures.

Specimen preparation

Tests were performed on the liver (N = 4), spleen (N = 4), left lung (N = 2), and right lung (N = 2) of 6 human cadaver specimens with ages ranging from 61 to 78 years, which were obtained through Science Care (Phoenix, AZ). Written informed consent was obtained from the donor or next of kin. All donors were screened to avoid any medical issues that would affect the mechanical properties of the tissue, e.g., cancer and chronic obstructive pulmonary disease. Specimens were fresh-frozen at -20˚C and stored until testing, which occurred between December 16, 2014 and October 9, 2015. Prior to preparation, specimens were thawed for at least 8 hours at 4˚C.

The solid organs, the liver and spleen, were encapsulated in a gelatin puck (Fig 1D and 1E). Gelatin was chosen to correspond with the shock impedance properties of soft tissue [23][24][25] and water [26]. A 10% w/v gelatin puck was created from 250 Bloom A gelatin powder (Knox, Sioux City, IA) according to the protocol outlined by Fackler and Malinowski [27]. Gelatin powder was vigorously mixed into cold water at 7-10˚C and subsequently heated until the gelatin was completely dissolved. The gelatin solution, totaling 12 L, was then poured into a cylindrical mold with the same diameter as the water chamber and allowed to cure for at least 8 hours at 4˚C. Subsequently, the organ was placed into a cavity in the shape of the organ, which was cut from the surface of the gelatin. An additional 12 L of gelatin solution was poured into the mold to encapsulate the rest of the organ. The gelatin supported the organs in an approximately physiological geometry for the duration of the experiment. The final dimensions of the gelatin puck were 0.41 m in depth and 0.43 m in diameter. To measure the dynamic pressure due to the pressure wave in the water chamber, six fiber optic pressure (FOP) sensors (FOP-M-PK, FISO, Quebec, QC, Canada) were inserted into the parenchyma of the liver and spleen through a hollow insertion tube. Two additional FOP sensors were placed into the gelatin to serve as references for computing the incident pressure. The locations of the FOP sensors for the liver and spleen are shown in Fig 1D and 1E. A similar procedure was initially attempted for the lungs. However, air leakage during potting and testing compromised the mechanical integrity of the gelatin. Additionally, it was not possible to confirm the insufflation of the lungs during testing since the cured gelatin is opaque.
To overcome these issues with air leakage, a novel encapsulation method for underwater testing of the lungs was developed. The lung was inserted into a polyethylene bag with four liquid-proof ports installed. Two ports served for insufflation and vacuum, and two ports served to insert sensors (Fig 1F). The lung was insufflated during testing by a pump that delivered air at pressures ranging from 5-10 kPa through a vinyl tube that passed through the insufflation port and connected to a barbed polyethylene fitting sutured to the bronchus. These insufflation pressures were based on typical mechanical ventilation pressures [28]. The vacuum port was attached to a pump with a vinyl tube and evacuated the air leaking out of the lungs during testing. To measure the dynamic pressure response of the lung, seven FOP sensors were inserted under the visceral pleura, and one FOP sensor was inserted into the main bronchus. An additional reference pressure sensor was placed next to the lung in the water chamber. This encapsulation method provided three key advantages: 1) precise control of lung insufflation, 2) removal of air leaking from the lung to the water chamber, and 3) full visibility of the lung, allowing the capture of high-speed video.

Prior to testing, organs were placed into the chamber with no water and positioned along the radial center of the chamber, approximately 0.25-0.30 m from the end of the diffuser. To position the lungs for testing, a thin plastic net was anchored to the chamber walls. The sensor cables and tubing for the lung were passed through water-tight ports at the top of the water chamber, and the chamber was subsequently filled with water until no air was present. Photos of the nominal pretest positions of the liver, spleen, and lungs are shown in Fig 1B and 1C.

Data processing and analysis

Pressure measurements. Data were processed and analyzed using MATLAB 2022b (Mathworks, Natick, MA). All pressure measurements except those made by the reference pressure sensor next to the organ were filtered with a zero-phase 4-pole Butterworth low-pass filter with a cutoff frequency of 50 kHz. Peak organ pressure was determined by computing the local maxima within 30 ms of the trigger and subsequently verified through visual inspection of the data trace. The dominant frequency of the pressure response of the organ was computed by averaging Welch's power spectral density [29] across all pressure sensors and subsequently selecting the frequency with the highest power.

To quantify the pressure dose to the organ, the incident pressure was computed by subtracting the reference pressure measurement from the chamber pressure measurement closest to the diaphragm. Prior to subtraction, the reference pressure measurement was low-pass filtered with a zero-phase 4-pole Butterworth filter with a cutoff frequency of 1 kHz to preserve the transient response of the incident pressure wave. This method of computing the incident pressure was chosen in order to overcome limitations associated with pressure measurements collected in an enclosed rigid chamber, where the pressure response of the organ alters the measured chamber pressure due to the relative incompressibility of the water. The subtraction procedure isolates the transient pressure dose from the organ pressure response. S1 Fig shows an example of pressure waveforms of the incident pressure computed using this method.
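For concreteness, this processing chain could be sketched in MATLAB as follows. This is our illustration, not the code distributed in S1 File: the variable names (organRaw, chamberP, referenceP) and the one-column-per-sensor array layout are assumptions.

```matlab
% Sketch of the pressure processing described above.
% Assumed: fs = 1 MHz sampling; organRaw, chamberP, referenceP are
% previously loaded time series (placeholders, one column per sensor).
fs = 1e6;                                    % sampling rate, Hz

% 50 kHz zero-phase 4-pole Butterworth low-pass for the organ sensors
[b50, a50] = butter(4, 50e3/(fs/2), 'low');
organP = filtfilt(b50, a50, organRaw);

% Peak organ pressure: local maximum within 30 ms of the trigger
win = 1:round(0.030*fs);
peakOrganP = max(organP(win, :));

% Dominant frequency: average Welch PSD across sensors, take the peak
[pxx, f] = pwelch(organP, [], [], [], fs);
[~, iMax] = max(mean(pxx, 2));
fDominant = f(iMax);

% Incident pressure: chamber sensor closest to the diaphragm minus the
% 1 kHz low-pass-filtered reference (removes the organ's slow response)
[b1, a1] = butter(4, 1e3/(fs/2), 'low');
incidentP = chamberP - filtfilt(b1, a1, referenceP);
```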
A two-sided hypothesis test was performed to examine the linear association between the peak pressure and incident pressure, with a significance level of p < 0.05 based on the computed t-statistic of the slope term. A Shapiro-Wilk test was performed to assess the normality of both the peak pressure range and the dominant frequency. To identify significant pairwise differences across organs, a Kruskal-Wallis test followed by a post-hoc Dunn-Sidak test was conducted, with a significance level set at p < 0.05.

Lung volume measurements. The volume of the lung (V) for the last test series was computed from the high-speed video by approximating the lung for a single test as an ellipsoid. The lung volume and the corresponding volumetric strain εV relative to the initial volume V0 are given by

V = (2/3) A b (1)

and

εV = (V − V0) / V0, (2)

respectively, where A is the cross-sectional area of the lung computed by manually tracing the lung boundary from the lateral high-speed video and b is the distance corresponding to the minor axis of an ellipse manually fitted to the lung boundary from the rear high-speed video (Eq (1) is the volume of an ellipsoid whose mid-plane cross-section has area A and whose out-of-plane axis has length b). Manual tracing was repeated every 3 ms for a total of 99 ms post diaphragm burst. The other test series for the lungs were not included in the volumetric analysis since they did not include high-speed video of the chamber from the rear view. The volumetric strain rate was computed with a forward finite difference.

Analytical modeling of the lung dynamics. A modified version of the Rayleigh-Plesset (RP) equation [30,31] was applied to understand the first-order dynamic response of the lungs due to a transient pressure wave [32,33]. For this model, the lungs are assumed to be a spherical gas bubble suspended in a spherical domain of incompressible liquid, which is confined by an elastic spherical shell. The choice of gas bubble confinement was made to get a preliminary understanding of the confinement effects of the rib cage in humans. The following assumptions are made: (1) the shell inertia is negligible; (2) the gas bubble behavior follows a polytropic process and its pressure is uniform; (3) there is zero mass transport across the bubble interface; and (4) dynamic viscosity and surface tension effects are negligible due to the large dimensions of the bubble (i.e., > 10^-1 m) [34]. The equation of motion for the bubble is

ρ [ R R̈ + (3/2) Ṙ² − λ R R̈ − 2 λ Ṙ² + (1/2) λ⁴ Ṙ² ] = p − p1, (3)

where ρ is the liquid density, p is the pressure inside the bubble, p1 is the pressure of the liquid, R is the bubble radius with time derivatives Ṙ = dR/dt and R̈ = d²R/dt², and λ is a dimensionless parameter defined as the ratio of R to the radius of the spherical shell RS (i.e., λ = R/RS). The last three terms, which contain λ, are the modification to the classic RP equation, which is recovered by setting λ = 0 (i.e., RS → ∞). Due to the liquid incompressibility, RS is related to R via volume conservation by

RS³ − R³ = RS0³ − R0³, (4)

where RS0 and R0 are the initial shell radius and bubble radius, respectively. While other versions of the RP equation [35,36] can better generalize to other boundary conditions, this version of the confined RP equation was chosen due to the inclusion of key parameters without undue complexity. Other models of the lungs that account for structure, material properties, and geometry [36][37][38][39] were considered, but complexity beyond the needs of this study placed these models out of scope. Eq (3) was numerically solved with MATLAB "ode45" for p and R as a function of time, t, given an incident pressure p1, defined as an instantaneous pressure increase of amplitude pA from initial pressure p0 with duration τ, i.e.,

p1(t) = p0 + pA for 0 ≤ t ≤ τ, and p1(t) = p0 otherwise. (5)
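As an illustration of the numerical procedure, a minimal MATLAB sketch of integrating Eqs (3)-(5) with ode45 is given below. This is our reconstruction, not the authors' S1 File code: the variable names, the assumed confinement ratio, and the polytropic gas law referenced to atmospheric pressure are illustrative assumptions.

```matlab
% Sketch: integrate the confined Rayleigh-Plesset model, Eqs (3)-(5).
% Assumptions: gauge pressures converted to absolute via patm; polytropic
% gas law for the bubble; confinement ratio RS0/R0 chosen arbitrarily.
rho  = 997;                 % liquid (water) density, kg/m^3
k    = 1.0;                 % effective polytropic index [40]
R0   = 0.092;               % initial bubble radius, m
RS0  = 1.5*R0;              % assumed initial shell radius (illustrative)
patm = 101.325e3;           % atmospheric pressure, Pa
pA   = 100e3; tau = 10e-3;  % incident pulse amplitude (Pa) and duration (s)

pliq = @(t) patm + pA.*(t >= 0 & t <= tau);        % Eq (5), absolute
pgas = @(R) patm .* (R0./R).^(3*k);                % polytropic bubble gas
lam  = @(R) R ./ (RS0^3 - R0^3 + R.^3).^(1/3);     % lambda = R/RS via Eq (4)

% State y = [R; dR/dt]; Eq (3) solved for the radial acceleration
rhs = @(t,y) [ y(2);
    ( (pgas(y(1)) - pliq(t))/rho ...
      - 1.5*y(2)^2*(1 - (4/3)*lam(y(1)) + (1/3)*lam(y(1))^4) ) ...
    / ( y(1)*(1 - lam(y(1))) ) ];

[t, y] = ode45(rhs, [0 0.1], [R0; 0]);
epsV   = (y(:,1).^3 - R0^3) ./ R0^3;               % volumetric strain, Eq (2)
pBub   = pgas(y(:,1)) - patm;                      % bubble pressure, gauge
```

Note the (1 − λ) factor in the denominator: solutions become singular as the bubble approaches the shell (λ → 1), and near-total collapse (εV → −100%) drives the polytropic pressure to unrealistically large values, consistent with the exclusion criterion stated below.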
The volumetric strain was computed with Eq (2), but assuming the volume of a sphere, i.e., V = (4/3)πR³.

The parameters for the model were chosen to best match the experimental data. The initial gas bubble radius R0 was set to 0.092 m based on the average effective radius of the lung prior to shock wave arrival, as computed from the volume estimated with Eq (1). The effective polytropic index for the lung was set to 1.0 based on Wodicka et al. [40]. The liquid density ρ was set to that of water, i.e., 997 kg/m³. As with the experimental data, all pressures are relative to atmospheric pressure. Simulations with volumetric strains of less than -95% were not included in the results since this would indicate total bubble collapse, which would lead to very high and unrealistic bubble pressures.

Organ pressure response waveform

A representative pressure response measured by sensors embedded throughout the organ in various locations, along with the corresponding incident pressure waveform, is shown in Fig 2. The average peak incident pressure was 68 kPa, 86 kPa, and 113 kPa, resulting in a peak organ pressure of 88 kPa, 106 kPa, and 119 kPa for the lung, liver, and spleen, respectively. The pressure response of the lung shows large regional differences in pressure magnitude (Fig 2A). In contrast, the pressure responses of the liver and spleen (Fig 2B and 2C) are tightly grouped, indicating minimal regional differences in pressure magnitude. For all of the organs, the intra-organ pressure responses were in phase, indicating that there was sufficient spatial sampling of the pressure to characterize the bulk pressure response of the organ. The oscillatory behavior was markedly faster for the liver and spleen compared to the lung. The morphology of the lung pressure was markedly different, in which the positive pressure peaks were shorter and greater in magnitude compared to the longer negative pressure troughs. The insets in Fig 2 provide a more detailed view of the organ pressure and incident pressure waveforms, and reveal that the liver and spleen exhibit a considerably fast pressure rise time of approximately 2 ms, compared to the lung with a rise time of approximately 10 ms. Additionally, it is evident from the insets that the incident waveform is characterized by a sub-millisecond rise time and pressure oscillations. These high-frequency pressure oscillations may arise from the pressure wave reflecting off of the chamber walls, which should have a lower bound of 1 kHz based on the time it would take a pressure wave to travel between the front and back of the chamber.
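As a rough consistency check on this bound (our estimate, since the chamber's front-to-back length is not stated explicitly), take the sound speed in water as c ≈ 1480 m/s. The travel time between walls separated by a distance L corresponds to a frequency of

f = c/L (one-way transit) or f = c/(2L) (round-trip standing-wave fundamental),

so f ≈ 1 kHz implies a front-to-back distance of roughly 0.7-1.5 m, i.e., a chamber somewhat longer than the 0.41 m deep gelatin puck it contains.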
Features of the organ pressure response

The relationship between peak incident pressure and the intra-organ mean and maximum peak organ pressure is shown in Fig 3A and 3B. The mean peak lung pressures due to incident peak pressures of 53 kPa-108 kPa ranged between 55 kPa and 147 kPa, but were not significantly correlated to peak incident pressure (R² = 0.16, p = 0.06). Conversely, the liver and spleen exhibited a wider range of mean peak organ pressures, from 79 kPa to 190 kPa, due to greater burst pressures, which produced greater downstream incident pressures from 46 kPa to 177 kPa compared to the lung. A significant positive correlation was observed between peak incident pressure and mean peak organ pressure for the liver (R² = 0.81, p < 0.001) and spleen (R² = 0.75, p < 0.001). Maximum peak lung pressures were considerably higher than mean peak lung pressures and ranged from 68 kPa to 394 kPa, but no significant correlation was observed (R² = 0.16, p = 0.06). The maximum peak pressures of the liver and the spleen were similar to the mean peak pressures and ranged from 83 kPa to 252 kPa. Similar trends across organs were observed with incident impulse, which ranged from 28 to 196 N·ms, likely due to correlation between the peak incident pressure and the associated impulse [20]. Maximum peak organ pressure significantly correlated with the peak incident pressure for the liver (R² = 0.80, p < 0.001) and spleen (R² = 0.75, p < 0.001). These differences between mean and maximum peak organ pressure can also be observed by computing the regional range of the peak organ pressure on a per-test basis (Fig 3C). The lung exhibited a significantly higher peak organ pressure range (median = 103 kPa) than either the liver (median = 19 kPa, p < 0.001) or the spleen (median = 22 kPa, p < 0.001) (Fig 3C). No significant differences were observed between pressure ranges for the liver and spleen (p = 0.82). The lung exhibited a significantly lower dominant frequency response (median = 28 Hz) compared to the liver (median = 176 Hz, p < 0.001) and the spleen (median = 198 Hz, p < 0.001) (Fig 3D). The dominant frequencies of the liver and the spleen were also significantly different from each other (p < 0.001).
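These comparisons could be reproduced along the following lines in MATLAB (a sketch with assumed variable names; the published S1 File is the authoritative implementation, and MATLAB has no built-in Shapiro-Wilk test, so that step would need a File Exchange function or an alternative such as lillietest):

```matlab
% Linear association between peak incident and mean peak organ pressure;
% fitlm's t-test on the slope provides the two-sided p-value.
mdl    = fitlm(peakIncidentP, meanPeakOrganP);   % assumed variable names
r2     = mdl.Rsquared.Ordinary;
pSlope = mdl.Coefficients.pValue(2);

% Pairwise organ comparisons of the per-test peak-pressure range:
% Kruskal-Wallis followed by Dunn-Sidak post-hoc comparisons.
[pKW, ~, stats] = kruskalwallis(pressureRange, organLabel, 'off');
pairwise = multcompare(stats, 'CType', 'dunn-sidak', 'Display', 'off');
```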
Volumetric response of the lung

The lung underwent large volumetric strains and strain rates due to the pressure wave in the water-filled chamber compared to the liver and spleen (Fig 4). The minimum and maximum volumetric strain for the specimen shown in Fig 2A was -24.0% and 15.6%, respectively (Fig 4A). The maximum and minimum volumetric strain rate was 43.3 s⁻¹ and -41.7 s⁻¹, respectively (Fig 4B). The volumetric strain oscillations occurred at the same dominant frequency as the pressure oscillations shown in Fig 2A. The corresponding lateral high-speed images of the lung in the undeformed, most compressed, and most expanded states are shown in Fig 4C and 4D. In the most compressed state, the lung surface deformed non-uniformly. S1 Video shows the temporal evolution of the volumetric response at 0.2 ms intervals.

Analytical model of the lung pressure-volume response

A confined Rayleigh-Plesset (RP) equation was solved to understand the driving force behind the pressure-volume response of the lung due to a transient pressure pulse. For this model, the lungs are assumed to be a spherical gas bubble with initial radius R0 suspended in a spherical domain of incompressible liquid confined by a spherical shell with radius RS. Fig 5 shows solutions for the bubble pressure (p) and volumetric strain εV for a range of different incident pressure amplitudes (pA) and durations (τ), and for different bubble confinements (denoted as the ratio of RS to R0). The waveform morphology of the bubble pressure exhibited shorter-duration positive pressure peaks with larger magnitudes compared to the longer negative pressure troughs, which were more pronounced with higher incident pressures (Fig 5A). The corresponding volumetric strain of the bubble was inversely related to the bubble pressure due to the gas behavior following a polytropic process. The maximum bubble pressure and volumetric strain scaled nonlinearly with both the incident pressure amplitude and duration (Fig 5B and 5C) and impulse (S2 Fig). Across the simulated pressure durations (τ), p was less than pA. However, for τ = 10 ms, simulations with pA above approximately 130 kPa yielded volumetric strains beyond -95%, leading to near bubble collapse and very high bubble pressures. As a result, these data were not included in Fig 5B and 5C. As the bubble becomes more confined (i.e., RS/R0 → 1.1), the maximum bubble pressures and volumetric strains increased markedly (Fig 5D and 5E).

Discussion

While there have been many attempts to establish injury guidelines for human lungs exposed to underwater blast [13,20], the criteria remain highly variable due to a lack of sufficient human data to reveal the underlying injury mechanisms. To address this gap, a series of shock tube experiments that subjected isolated lungs to shock waves in a water chamber was conducted. Experiments were repeated with the liver and the spleen to compare the lung response to those of solid organs. Lastly, this study utilized an analytical model based on the Rayleigh-Plesset (RP) equation to isolate the effect of air on lung response and to understand the mechanisms of lung deformation.
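As a usage example of the integration sketch in the Methods above, the confinement trend of Fig 5D could be explored by sweeping the assumed shell radius (illustrative only; this reuses rho, k, R0, patm, pliq, and pgas from that sketch):

```matlab
% Sweep confinement ratios RS0/R0 and record peak gauge bubble pressure.
ratios = [1.1 1.5 2 4 8];
pPeak  = zeros(size(ratios));
for i = 1:numel(ratios)
    RS0 = ratios(i)*R0;   % reuses definitions from the earlier sketch
    lam = @(R) R ./ (RS0^3 - R0^3 + R.^3).^(1/3);
    rhs = @(t,y) [ y(2);
        ( (pgas(y(1)) - pliq(t))/rho ...
          - 1.5*y(2)^2*(1 - (4/3)*lam(y(1)) + (1/3)*lam(y(1))^4) ) ...
        / ( y(1)*(1 - lam(y(1))) ) ];
    [~, y]   = ode45(rhs, [0 0.1], [R0; 0]);
    pPeak(i) = max(pgas(y(:,1))) - patm;
end
```

One would expect pPeak to grow steeply as RS0/R0 approaches 1, consistent with Fig 5D.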
Upon analyzing the pressure measurements (Fig 2A), transient spikes in lung pressure with a duration similar to the incident pressure waveform were not observed, suggesting that shock wave front propagation through the lungs is severely attenuated. This attenuation is likely due to the unique structure of the lung, which is composed of many microscopic air sacs. Each air sac acts as a solid-gas interface with a high acoustic impedance mismatch that diffracts and reflects the shock wave front. At a macro-scale, these events superimpose to severely and quickly dissipate the energy of the shock wave front. This proposed dissipation mechanism is similar to the well-characterized shock wave attenuation mechanisms in foams [41,42]. Despite substantial shock wave attenuation, the lungs still underwent large pressure cycles characterized by larger-magnitude, shorter positive pressure peaks and smaller-magnitude, longer negative pressure troughs (Fig 2A). In contrast, the solid organs exhibited a much faster oscillatory response between 175 Hz and 200 Hz, which may correspond to the resonant frequency of these organs. Unlike for the solid organs, the peak pressures associated with these cycles exhibited large test-to-test variations that did not correlate with peak incident pressures (Fig 3A and 3B). In some tests, the measured peak pressure greatly exceeded the peak incident pressure. This finding provides further evidence that the pressure response is not dominated by the shock wave front, and that the pressure and volume cycles instead follow a classic thermodynamic compression-expansion process [43].

To gain insights into this interesting PV behavior, we solved an RP equation in which a spherical gas bubble within a domain of incompressible liquid is subjected to a short-duration square pressure wave [30,31]. Fetherston et al. solved a similar equation to understand the dynamics of marine mammal lungs when exposed to underwater blast [36]. Although this RP model oversimplifies the complexities of lung composition, material properties, and structure, the PV time series (Fig 5A) exhibits waveform morphologies that are remarkably similar to those of the lung (Figs 2A and 4B). These morphological similarities provide evidence that the bulk PV response of the lung is due to the compression of the contained gas, which is initiated by the shock wave. One possible mechanism for how the shock wave initiates lung compression is that the external water-tissue interface has a small acoustic impedance mismatch, so the reflection from the water-tissue interface is small, allowing more energy to be transmitted into the body. However, at the interface between the pleural cavity and the lung, the acoustic impedance mismatch is large, leading to substantial energy deposition at the lung surface, which then initiates a bulk PV response. This proposed mechanism of lung compression in underwater blast exposure is substantially different from the mechanism of lung compression in air blast exposure as modeled by Stuhmiller [44], due to the difference in the surrounding fluid. In Stuhmiller's analysis, the air-to-tissue interface reflects the blast wave, resulting in momentum transfer to the outer tissues of the chest and abdomen. The resulting motion of the chest wall and diaphragm is then used to develop a model for lung compression. Another possible mechanism for how the shock wave initiates lung compression can be observed in studies involving foams, where heavily attenuated shock waves convert to high-pressure compression waves causing foam compaction [45]. For both initiation mechanisms, we expect that these PV cycles are also present when the lung is exposed to air blast, but with smaller
amplitudes due to weaker acoustic coupling between the torso and the air compared to coupling with water [12,13], and higher frequencies due to air having less inertia than the surrounding water.

Confinement of the lungs by the rib cage plays a critical role in the PV response. To understand these effects, a modified version of the RP equation that accounts for confinement was solved by enclosing the gas bubble and surrounding liquid with an elastic shell [32,33]. By accounting for confinement, bubble pressures substantially increased, by up to approximately 8 times, when the bubble was in a shell only 10% larger than its original radius (Fig 5D). Although we expect the corresponding volumetric strain in real scenarios to decrease with confinement in humans, our model shows the opposite (Fig 5E). This discrepancy is attributed to the treatment of the elastic shell in Eq (4), in which variations of bubble volume are accommodated by modifying the shell radius. From an injury perspective, a decreased volumetric strain is desirable. Yet, this accommodation comes at the cost of inducing higher alveolar pressures, which could lead to increased forcing of air emboli into the capillaries [46]. This confinement could also lead to local tissue shearing when the soft lungs expand and impinge on the stiffer rib cage, potentially causing the lung tissue to deform into the intercostal space. This mechanism of injury is consistent with clinical observations of rib markings on the lungs following air blast injury [47]. However, in air blast scenarios, the transfer of momentum to the chest wall due to the high acoustic impedance produces rib motion that would compress the lungs, whereas in underwater blast scenarios, the lungs would expand into the rib cage. The effects of lung confinement are likely to vary based on the individual's rib cage stiffness and geometry, as well as donned personal protective equipment or occupation-specific equipment, which may further restrict the lungs.

These findings have significant implications for our understanding of the injury mechanisms for lungs and other gas-containing organs exposed to underwater blast. While the mechanisms of lung injury in underwater blasts are thought to closely follow, albeit be more damaging than, those of air blasts [20], i.e., spallation, implosion, and inertia [21], the extent of damage caused by these mechanisms remains unknown despite numerous studies on air blast injuries [16]. Among these mechanisms, implosion forces are the most consistent with the observed lung response in this study, resulting in rapid compression and expansion of gaseous content. At the alveolar length scale, compression can cause the alveolus to collapse and result in atelectasis [48], while pneumothorax can occur at the length scale of the lung [18,19]. Rapid lung expansion can cause alveolar and capillary overstretching and rupture, or the driving of extravascular fluid into the alveolar space, causing pulmonary oedema and hemorrhage [16]. These injuries may not present uniformly throughout the lung, given the regional pressure differences (Figs 2 and 3C) that are due to the heterogeneous structure of the lung. The implosion mechanism was first postulated by Forbes in 1812 [49], later described by Schardin in 1950 [21], and conceptually modeled by Ho in 2002 [46]. Yet, to the best of our knowledge, this study is the first to present experimental evidence of this mechanism, with direct visualization of lung volume over the course of events.
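The impedance-mismatch argument above can be made quantitative with a textbook estimate (our calculation, using nominal literature impedance values rather than measurements from this study). The normal-incidence pressure reflection coefficient between media 1 and 2 is

R = (Z2 − Z1) / (Z2 + Z1).

With Zwater ≈ 1.48 MRayl and Zsoft tissue ≈ 1.6 MRayl, R ≈ 0.04, so only about 0.2% of the incident energy (R²) is reflected at the water-tissue interface. At a tissue-gas interface (Zair ≈ 4 × 10⁻⁴ MRayl), R ≈ −0.999, i.e., nearly total reflection, consistent with the strong energy deposition at the lung surface and with the weaker torso-air coupling invoked for air blast.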
Peak incident pressures and associated impulses measured in this study fall within the reported range of previous studies [13,20]. However, it is difficult to determine with any granularity the severity of injury that would result from the exposures in this study, given the large spread in the injury criteria [13,20]. This variability in reported data is likely due to the variety of approaches that have been used to develop these criteria, each with its own significant limitations [13]. The peak incident pressures and impulses measured in this study are most likely above safe levels based on an animal study conducted by Richmond et al. [50,51], but below 50% lethality based on a study by Lance et al. [20] that combined field injury data with computational predictions of incident pressures and impulses. It is important to note that the injury criteria developed in these studies are based on incident pressure and not the lung pressure, which can reach up to approximately six times the peak incident pressure (Fig 3B). These internal pressures should be an important factor in the development of future injury criteria, as they are a more accurate representation of the tissue-level loading that directly leads to injury.

Conclusion

This study provides the first directly observable experimental data and characterization of human lung dynamics when exposed to underwater blast. We found that the shock wave front was severely attenuated by the high-impedance-mismatch gas-solid microstructure of the lung, similar to gas-filled foams [41,42]. However, the shock wave front initiated large bulk PV cycles that are distinct from those of the solid organs. By solving the RP equation, we show that these large PV cycles are due to the compression of contained gas, which follows a classic thermodynamic process [43]. By further modifying the RP equation to include physical confinement, we find that the PV cycles are also highly dependent on physical confinement, which is governed by the rib cage properties and may be modified by donned equipment. These findings have significant implications for our understanding of the proposed injury mechanisms for both underwater and air blast exposures, in that they provide the first direct evidence of the implosion injury mechanism, which was first proposed by Forbes [49] and has been expanded upon over the course of two centuries [21,46].

A number of future studies are needed to fully characterize lung dynamics during blast and their role in injury. In this study, isolated lungs were placed in a chamber that is not fully representative of human blast exposure in an open body of water. The experimental setup involving an organ confined within a chamber presents two major limitations. Firstly, shock wave reflections occur at the chamber walls, resulting in these reflected waves impinging on the organs. Secondly, the PV changes induced in the organs cause measured pressure changes in the chamber, making it challenging to accurately quantify the true incident pressure dose without further data processing. These limitations could be overcome by running tests in a substantially larger volume of water, which would also recreate open-water phenomena, e.g., shock wave reflection from the ocean floor or rarefaction from the surface [13,20]. Additionally, these shock waves should be generated with underwater explosives to better represent real-world exposure to blast, and should cover a larger range of incident pressures to form a basis of comparison with previous injury criteria [20].
Future studies should also characterize the dynamics of the lungs with a combination of experimental models. These studies should include additional cadaveric experiments to ensure a more accurate geometry and structure (e.g., the effects of the rib cage), and animal experiments to better characterize injury in vivo. To fully understand the injury mechanisms at the alveolar length scale, more detailed in vitro and in vivo models are needed in conjunction with higher-resolution imaging techniques [52,53] to overcome the issues with limited pressure sensing resolution.

Higher-fidelity computational models of the lungs exposed to underwater blast are critical to understanding the injury mechanisms and designing protective measures. Our study involved the use of the RP equation to create an analytical model of the lung, which is intended to be a first-order approximation. This model oversimplified the true composition, material properties, and structure of real lungs, resulting in PV responses that differ from the test data in peak pressures and volumetric strains, as well as in their rates of decay. Specifically, by the third PV cycle, the maximum amplitude of that cycle in the test data has decreased by over 50% (Fig 4A). We believe that these discrepancies stem from the need to explicitly include sources of energy loss. For example, this model does not account for the dynamic viscosity of the liquid and the bubble surface tension [31], but we believe that these factors are negligible due to the large dimensions of the bubble [34]. Additionally, the model does not account for the viscoelastic nature of the lung tissue [54,55], which would produce lower volumetric strains compared to the RP model since the stiffness of the tissue would resist volumetric strain. This tissue resistance would also affect the subsequent decay of the PV response. Future studies should build on the history of high-fidelity finite element models used for blast [39,44,[56][57][58][59][60] to better understand the unique PV response. However, these models must be validated against high-fidelity human data collected in underwater blast scenarios similar to those presented in this study.

Fig 1. Experimental setup and organ preparation method for applying overpressure to organs within a water-filled chamber. (A) Schematic illustrating the key components of the shock tube, including pressure sensors (red downward-pointing triangles) for measuring the pressure evolution of the shock wave in the driver and driven sections, and pressure sensors (magenta downward-pointing triangles) for measuring the overpressure applied to the organ in the water chamber. Lateral images show the lateral placement (green triangle) and outline (yellow dotted line) of the (B) liver and spleen, and (C) lung within the water chamber. Scale bar, 0.1 m. To prepare the organs for testing, the (D) liver and (E) spleen are encapsulated in a gelatin puck and instrumented with six pressure sensors (blue circles) inserted into the parenchyma and two reference pressure sensors inserted into the gelatin. (F) The right and left lungs were encapsulated in a polyethylene bag installed with a lung insufflation port, a suction port for removing air leakage from the lung, and two sensor ports that pass through eight pressure sensors (blue circles), with six located under the visceral pleura, one inserted into the main bronchus, and one positioned in the water chamber as a reference measurement.
Fig 2. Representative organ pressure response waveforms. (A) Lung, (B) liver, and (C) spleen pressure responses due to the incident pressure (black line). The light blue lines show the individual organ sensor measurements, while the blue line represents the temporally averaged response. Insets show the first 5 ms of the organ pressure responses and incident pressure. https://doi.org/10.1371/journal.pone.0303325.g002

Fig 4. Volumetric deformation of the lung due to shock wave exposure. (A) Volumetric strain and (B) strain rate time series of the lung for six tests (gray) and the corresponding mean (black). Outlines of the lung are shown in the undeformed (0 ms, red), most compressed (10 ms, yellow), and most expanded (25 ms, blue) states. (C) Lateral image in the undeformed state with an overlay of the lung outlines shown in (A). (D) Enlarged images of the lung shown in (C). Scale bar, 0.1 m. https://doi.org/10.1371/journal.pone.0303325.g004

S1 Video. Lateral high-speed video of the lung undergoing volumetric deformation due to incident pressure. (MOV)

S1 File. MATLAB code for generating the data presented in the manuscript. (M)

The study was approved by the U.S. Army Medical Research and Materiel Command (USAMRMC) Human Research Protections Office for compliance with the USAMRMC Policy for Ethical Use of Human Cadavers in USAMRMC Research and the U.S. Army Policy for Use of Human Cadavers for Research, Development, Test, and Evaluation, Education, and Training.
2023-08-16T13:09:12.153Z
2023-08-12T00:00:00.000
{ "year": 2024, "sha1": "5b175b7fc4abbdb87bb880bbc4d3b4fc07eba783", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0303325&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2c427fa1e86125687f1f62c65baf6707534dfcef", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Biology" ] }
236855716
pes2o/s2orc
v3-fos-license
Pancreatoblastoma. Case report and review of literature

Although pancreatoblastoma (PB) is a rare tumor, it is the most common malignant pancreatic tumor in children. Clinical presentation is insidious, so early diagnostic suspicion allows timely therapy. We report 3 cases of PB treated at our center. The first two cases achieved complete disease response after full tumor resection. The first one is in complete remission at 7 months after chemotherapy. The second patient is in second complete remission at 206 months after diagnosis and 128 months after metastatic relapse. The third case died from disease progression 61 months after presenting with an initially metastatic, unresectable tumor. Histology, clinical features and treatment options are discussed along with the presentation of the cases.

Introduction

Pancreatoblastoma (PB) is a rare exocrine pancreatic tumor of childhood, which can sporadically affect adults. Although it comprises only 0.5% of pancreatic tumors, 1 it is the most common malignant pancreatic tumor of childhood. 2,3 Essential for an accurate histological diagnosis are acinar differentiation and the presence of squamoid nests. 4 Clinical onset is mainly insidious or asymptomatic, frequently leading to advanced initial staging (III, IV according to TNM). 5 Most of the time it is associated with elevation of alpha-fetoprotein (AFP). 6,7 Surgery may be curative when complete resection is possible, but non-resectable or metastatic disease has a dismal outcome. 2,8 Less than 200 cases have been reported in the literature. 1 Knowledge about this rare disease is important due to the relevance of an early diagnosis. The objective of this paper is to report three cases treated over 30 years at the care center "Hospital de Pediatría S.A.M.I.C. Prof. Dr. Juan P. Garrahan" and to review the literature.

Case 1

A palpable epigastric mass was detected in a five-year-old male patient during a health control visit. The child complained of abdominal pain without any other accompanying symptom. An ultrasound study showed a pancreatic tumor. The initial CT scan disclosed a tumor with lobulated contours in the pancreatic body, with heterogeneous density, hypodense areas and calcifications, that contacted the liver without clear-cut limits, with a volume of 145 cm³. The superior mesenteric vein showed tumor invasion (Figure 1). The AFP serum level was 239 ng/ml (normal range 8.5 ± 5.5 ng/ml). There was no evidence of disease on thoracic and brain CT scans. A percutaneous Tru-Cut biopsy was performed, and the pathology report informed pancreatoblastoma, stage IIA according to TNM. Due to the inability to assure a complete surgical resection, neo-adjuvant chemotherapy with cisplatin/doxorubicin was delivered, following SIOPEL 3 guidelines (Liver Tumor Strategy Group of the International Society of Pediatric Oncology). Evaluation after the third cycle showed an AFP of 15 ng/ml and a CT scan with very good partial response: tumor volume of 9.7 cm³, with superior mesenteric vein invasion and splenic vein compression (Figure 2). After the 4th neo-adjuvant chemotherapy cycle, surgical resection was performed with microscopically free margins. Therapy was completed after two cycles of adjuvant chemotherapy without evidence of residual disease. A transitory elevation of AFP was seen after surgery. The patient is currently free of disease +10 months post-diagnosis.

Case 2

A six-year-old female patient complained of pain and abdominal distension over the previous 3 months. Abdominal ultrasound showed a well-defined, heterogeneous pancreatic mass of 115 x 74 mm with multiple necrotic areas.
However, a precise localization of the tumor was not possible based on images. The initial AFP serum level was 615 ng/ml. Thoracic and brain CT scans were without noteworthy findings. A distal pancreatectomy was performed and the pathology analysis reported a pancreatoblastoma, stage IIA (Figure 3). The post-surgery serum level of AFP was 73.8 ng/ml, and AFP returned to normal after six months. Six years later the AFP level increased to 210 ng/ml and a CT scan showed local recurrence and a metastasis in liver segment VIII. A new excision of the tumor and metastasis was performed, with microscopically free margins. The post-surgical AFP level was 7.2 ng/ml. Adjuvant chemotherapy was started, delivering 5 cycles of carboplatin/doxorubicin + cisplatin (SIOPEL 2 guidelines). A second complete remission was achieved with normal levels of AFP. Currently the patient is free of disease +206 months (about 17 years) post-diagnosis.

Figure 2. Abdomen and pelvis CT scan after the third neo-adjuvant chemotherapy cycle. (A) Axial plane and (B) coronal plane highlight a heterogeneous formation with a hypodense central area and calcifications that partially invades the superior mesenteric vein. (C) Coronal plane in arterial phase shows resolution of the image in segment III of the liver.

Case 3

An 11-year-old male patient consulted for asthenia and weight loss during the previous month, with a palpable tumor mass and peripheral lymph nodes. The initial CT scan showed a hypodense solid lesion of 30 mm in greatest diameter in liver segment VI, a lesion of 40 mm in greatest diameter in the caudate lobe, and, in the pancreatic head, a heterogeneous formation of 44 x 40 mm with hypodense areas and contrast enhancement, associated with retroperitoneal lymphadenopathies. Brain and thoracic CT scans and a bone technetium scan were without findings. The AFP serum level was < 3 ng/ml. A biopsy of a left supraclavicular lymphadenopathy reported metastatic pancreatoblastoma. Systemic chemotherapy was started with cisplatin-doxorubicin (SIOPEL 3 guidelines). Six cycles were delivered with partial response by CT scan: a residual image of 5.5 cm³ remained at the pancreatic head. Excision of the residual tumor was performed, but during surgery hepatic metastases and retroperitoneal lymph nodes were observed and confirmed by biopsy. Second-line treatment was started with carboplatin/etoposide for 6 cycles, achieving a second remission. One month after the end of treatment, a new lesion of 1.8 cm³ was seen in the uncinate process of the pancreas. Tumor excision was performed, but biopsy showed neoplastic involvement of lymphatic lumina and perineural spaces. There was no other evidence of macroscopic disease and close follow-up ensued. Three years later the patient suffered a third recurrence, with lesions in the pancreatic tail and bilateral multiple pulmonary metastases. The AFP level was 10 ng/ml. A third line of treatment was started, with 3 cycles of cyclophosphamide/topotecan and 2 cycles of irinotecan, with progressive disease. The patient survived with palliative care until 61 months post-diagnosis, 8 months after the last relapse.

Discussion

PB is considered a rare tumor of childhood. According to the SEER (Surveillance, Epidemiology and End Results) program, the annual incidence of pancreatic tumors is 0.191 per million in the population between 0-19 years old. 8 The Argentine pediatric cancer registry (ROHA) has registered 27 patients aged 0-15 years old with pancreatic tumors from 2000 to 2017, of which 4 were PB. It is commonly diagnosed in the first decade of life (average age 2.4-5 years old) 1,2,7,9 and is more frequent in males (1.3-2:1). 1,7
Abdominal pain and a palpable mass are the main initial symptoms. These are large tumors (7-18 cm in diameter). Due to their soft consistency, they rarely generate symptoms of duodenal obstruction. 10 75% of cases are associated with increased AFP serum levels, 7 reflecting an embryonic origin shared with hepatoblastoma. Findings on imaging studies are suggestive: presence of a large, well-defined, multilobulated, heterogeneous mass with necrosis or calcification and septa that enhance on CT scan. Ultrasound demonstrates mixed echogenicity with hypoechoic areas corresponding to necrosis. Vascular invasion makes them more aggressive. At onset, one third of patients have distant metastases, the liver being the most common site, 9 followed by lung, bone, mediastinum, and lymph nodes. Although ours was a small number of patients, our cases were somewhat older than usual, and the clinical data and initial images were consistent with the literature.

Figure 3. Macroscopic and microscopic pathology (case 2). Cystic-solid, well-defined tumor mass. Histologically solid pattern, tubulo-acinar with squamoid morula (circular mark). Hematoxylin-eosin staining, 100X.

PB originates in persistent pluripotent embryonic cells and histologically resembles the acinar fetal pancreatic tissue incompletely differentiated at week 8 of gestation. 11 Histologically it is characterized by an acinar component with squamous cell differentiation, but in two thirds of cases it can exhibit endocrine and focal ductal differentiation. 4,12 The solid hypercellular areas consist of nests of polygonal cells alternating with areas of acinar differentiation forming small glandular lumina. The stroma can range from paucicellular to hypercellular. 13 The characteristic "squamoid corpuscles" vary in form from islets of flat epithelioid cells to frank keratinization. 12 Approximately 90% show acinar differentiation, staining by immunohistochemistry for pancreatic enzymes such as trypsin, chymotrypsin, alpha-1-antitrypsin or lipase. The epithelial components can be positive for AFP, associated with serum elevations. In two thirds of cases there may be endocrine differentiation with positive staining for chromogranin, synaptophysin, or neuron-specific enolase. 12 Nuclear or cytoplasmic beta-catenin is observed especially in 80% of the squamoid corpuscles and correlates with molecular alterations of the APC/beta-catenin pathway in 40-60% of sporadic PB 13 and in forms of congenital presentation (associated with Beckwith-Wiedemann syndrome and Familial Adenomatous Polyposis). 86% have alterations at chromosome 11p15.5. 4,13 These alterations are associated with other embryonic tumors such as hepatoblastoma, suggesting a common genetic origin. Pathologic findings of the reported patients are shown in Table 1.

The best treatment option is surgery, and a complete resection may be curative. Invasion of the portal vein or hepatic artery, metastatic disease and invasion of local vascular structures contraindicate initial surgical intervention. In these cases, neo-adjuvant chemotherapy is used. 2,8 Many useful agents have been reported, but because of the similarity with hepatoblastoma the PLADO scheme is recommended (cisplatin 80 mg/m² and doxorubicin 60 mg/m²). AFP can be used as a parameter of tumor response. 2,8 In patients without response to neo-adjuvant treatment, with incomplete resection or with local recurrence, radiation therapy can be used as a therapeutic option. 8,14 There is no standard treatment regimen for metastatic pancreatoblastoma. 14
The overall survival at 5 years is 63.7% (48-79.4%) in the largest published series, 1,7 and the main prognostic factor is achieving complete surgical resection. 1,7 Follow-up guidelines have not been established yet. Periodic physical examinations, imaging, and AFP monitoring are suggested. 2 The mean follow-up of our patients was 92 (10-206) months. The first two cases achieved complete disease response given the full macroscopic and microscopic tumor resection. Since pancreatoblastoma is a rare pediatric pathology, a multidisciplinary approach involving oncologists, pathologists, surgeons and imaging specialists is of great importance to achieve a correct diagnosis and management.

Financial support. Not received.
2021-08-04T00:04:49.527Z
2020-06-28T00:00:00.000
{ "year": 2020, "sha1": "0458b73a75983feac5ce164a99df5b6a4966692a", "oa_license": "CCBYNCSA", "oa_url": "http://www.actagastro.org/numeros-anteriores/2020/Vol-50-N2/Vol50N2-PDF18.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "cf863f8ff800b250e66b71cef0c2c0b231ed2d85", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
73660766
pes2o/s2orc
v3-fos-license
A study on treatment of resistant mastitis in dairy cows

The study was undertaken to determine the prevalence and treatment of antibiotic-resistant mastitis in dairy cows. The predominant resistant causative pathogen was Escherichia coli (50.64 %), followed by S. aureus (44.25 %) and methicillin-resistant Staphylococcus aureus (MRSA) (5.11 %). These isolates were found sensitive to gentamicin, enrofloxacin, amoxicillin+sulbactam and ceftriaxone, and resistant to amoxicillin, oxytetracycline, penicillin G and oxacillin. In all the treatment groups of E. coli, S. aureus and MRSA mastitis, the post-treatment pH and SCC were significantly (P < 0.01) decreased when compared to pre-treatment values, and the post-treatment electrical conductivity was significantly (P < 0.01) increased when compared to the pre-treatment value. In E. coli mastitis, treatment with amoxicillin+sulbactam, ceftriaxone, enrofloxacin and gentamicin showed 74.1 %, 67.75 %, 76.67 % and 64.52 % clinical recovery, and in S. aureus mastitis, 65.25 %, 65.25 %, 72.43 % and 68.98 % clinical recovery, respectively. In MRSA mastitis, enrofloxacin was found to be highly effective in comparison to amoxicillin+sulbactam.

INTRODUCTION

Mastitis is considered the most important disease affecting the productive performance of cattle worldwide, contributing to economic losses (Kumar et al., 2010). In the control of mastitis, the improper use of antimicrobial agents on dairy farm animals is a major concern, as it leads to the emergence of resistant zoonotic bacterial pathogens (Piddock, 1996). The emergence of antibiotic resistance in S. aureus from mastitic dairy animals has been shown in recent years. Beta-lactam antibiotics are frequently used in mastitis therapy, and the resistance is due to the production of beta-lactamases and the low-affinity penicillin-binding protein PBP2A (Olsen et al., 2006). Acquired antimicrobial resistance in bacteria is an increasing threat in human as well as in veterinary medicine. Among the various antibiotic-resistant strains, methicillin-resistant S. aureus (MRSA) is a serious concern because of its public health significance. All MRSA were resistant to members of the penicillin family, such as ampicillin, oxacillin and penicillin. In India, a high prevalence of MRSA (13.1 %) has been reported, and the isolates were resistant to streptomycin, oxytetracycline, gentamicin, chloramphenicol, pristinomycin and ciprofloxacin (Kumar et al., 2011). To date, β-lactamase-resistant penicillins such as methicillin and oxacillin are not used in dairy cows, except for cloxacillin in some of the products used for intramammary administration (Turutoglu et al., 2006). Comprehensive information on the prevalence of antimicrobial resistance in bovine mastitis pathogens in milk and their management is lacking in India. Enrofloxacin was found to be most effective against MRSA (Kenar et al., 2012). Keeping this in view, in the present study an attempt has been made to investigate the treatment of resistant mastitis in dairy cows.

MATERIALS AND METHODS

Sampling and bacterial culture: 401 milk samples from acute mastitis cases at the Large Animal Clinic Medicine Unit of Madras Veterinary College Teaching Hospital and six dairy farms in Coimbatore district were collected, transported and cultured. Escherichia coli and staphylococci were isolated as per the guidelines of the National Mastitis Council (NMC). The S. aureus isolates were characterized by their growth on blood agar and mannitol salt agar and positive results for catalase and coagulase, while the E. coli isolates were identified on eosin methylene blue agar and by a negative oxidase test.
Based on the incidence of common causative pathogens and sensitivity tests, isolates were categorized as resistant, i.e., exhibiting in vitro resistance to 1 or 2 antimicrobials, or multidrug-resistant, i.e., exhibiting in vitro resistance to 3 or more antimicrobials. Cows with resistant mastitis were grouped as follows: Group I, E. coli (n=119); Group II, Staphylococcus aureus (n=104); and Group III, methicillin-resistant S. aureus (n=12).

Antibiotic sensitivity test: Antimicrobial susceptibility testing was carried out, with inoculum adjusted to the 0.5 McFarland turbidity standard, by the agar disc diffusion method on Mueller-Hinton agar plates following the guidelines of CLSI (2008). All the isolated bacteria were tested in vitro for their sensitivity to 8 antibiotics commonly used in veterinary practice, viz., enrofloxacin, amoxicillin+sulbactam, amoxicillin, gentamicin, ceftriaxone, oxytetracycline, penicillin G and oxacillin.

PCR for identification of the mastitis-causing bacteria: The isolates were confirmed to be E. coli and S. aureus by targeting the specific 16S-23S rRNA gene and gap gene as described by Riffon et al. (2001) and Yugueros et al. (2001). The staphylococcal isolates were confirmed to be MRSA by amplifying the MRSA-specific genes mecA and blaZ as described by Lee (2003) and Martineau et al. (2000), respectively.

Clinical signs: The cows with acute clinical mastitis exhibited clinical signs such as inappetence, anorexia, pyrexia, reduced rumen motility, and swollen, hot and painful udders. Milk colour changes included dirty white or yellowish colour, flakes, serous/watery consistency and blood-mixed milk. Clinical examination was carried out as described by Boddie (2000).

pH, electrical conductivity and somatic cell count (SCC): The collected milk samples were further subjected to pH measurement (pH strips), electrical conductivity (DRAMINSKI mastitis detector) and somatic cell counting (DeLaval somatic cell counter). An SCC value > 500,000 cells/ml of milk together with clinical signs was taken as the criterion to declare an animal clinically mastitic. The results were statistically analyzed utilizing the SPSS Version 14 statistical software package.

Treatment: Based on culture and antibiotic sensitivity tests, cows were allotted to the following four treatment trials.

1. Gentamicin @ 4 mg/kg body weight IM once daily for 7 days in the E. coli (n=31) and S. aureus (n=29) groups, as well as 100 mg per quarter by intramammary infusion once daily for 7 days in the S. aureus group.

2. Ceftriaxone @ 5 mg/kg body weight IM once daily for 7 days in the E. coli (n=31) and S. aureus (n=23) groups, as well as 250 mg per quarter by intramammary infusion once daily for 7 days in the S. aureus group.

3. Enrofloxacin @ 5 mg/kg body weight IM once daily for 7 days in the E. coli (n=30), S. aureus (n=29) and MRSA (n=6) groups, as well as 100 mg per quarter by intramammary infusion once daily for 7 days in the S. aureus and methicillin-resistant S. aureus (MRSA) groups.

4. Amoxicillin + sulbactam @ 10 mg/kg body weight IM twice daily for 7 days in the E. coli (n=27), S. aureus (n=23) and MRSA (n=6) groups, as well as 300 mg per quarter by intramammary infusion once daily for 7 days in the S. aureus and methicillin-resistant S. aureus (MRSA) groups.

Depending on the severity, cows in all groups were treated with the non-steroidal anti-inflammatory drug meloxicam IV @ 0.5 mg/kg body weight daily for 1-5 days and chlorpheniramine maleate @ 0.5 mg/kg body weight IM daily for 5 days. In the E. coli group, normal saline was administered @ 10 ml/kg body weight IV daily for 1-5 days depending on the severity.
Post-treatment assessment was carried out after 7 days, based on milk pH, electrical conductivity, somatic cell count and clinical improvement.

RESULTS

Antibiotic-resistant mastitis was detected in 235 out of 401 cows, accounting for 56.1 %. The predominant resistant causative pathogen was E. coli (50.64 %), followed by S. aureus (44.25 %) and MRSA (5.11 %).

Antibiotic sensitivity test: E. coli showed the highest sensitivity to enrofloxacin (79 %), followed by amoxicillin and sulbactam (74 %), gentamicin (73.1 %) and ceftriaxone (69 %). The isolates had the highest resistance to penicillin (63 %), followed by amoxicillin (52.1 %), oxytetracycline (47.9 %) and methicillin (45.4 %). Most of the E. coli isolates (86.55 %) were found to be resistant, i.e., resistant to 1 or 2 antimicrobials, and a few E. coli isolates (13.45 %) were found to be multidrug-resistant, i.e., resistant to 3 or more antimicrobials. S. aureus isolates were most sensitive to enrofloxacin (79.8 %), followed by gentamicin (71.2 %), amoxicillin and sulbactam (69.2 %) and ceftriaxone (69.2 %). The isolates showed the highest resistance to penicillin (63.5 %), followed by amoxicillin (61.5 %), methicillin (52.9 %) and oxytetracycline (49 %). Most of the S. aureus isolates (80.77 %) were found to be resistant, i.e., resistant to 1 or 2 antimicrobials, and a few S. aureus isolates (19.23 %) were found to be multidrug-resistant, i.e., resistant to 3 or more antimicrobials. MRSA showed maximum sensitivity to enrofloxacin (75 %) and amoxicillin and sulbactam (75 %), followed by gentamicin (66.7 %) and ceftriaxone (58.3 %). The isolates showed the highest resistance to methicillin (100 %) and amoxicillin (91.7 %), followed by penicillin (83.3 %) and oxytetracycline (41.7 %). A few MRSA isolates (8.33 %) were found to be resistant, i.e., resistant to 1 or 2 antimicrobials, and most of the MRSA isolates (91.67 %) were found to be multidrug-resistant, i.e., resistant to 3 or more antimicrobials.

PCR for identification of the mastitis-causing bacteria: Out of 235 milk samples, the specific target 16S-23S rRNA gene (E. coli) could be amplified from 119 isolates, a positivity of 50.64 % (119/235), and the gap gene (S. aureus) could be amplified from 104 isolates, a positivity of 44.25 % (104/235). Screening for both the specific target genes mecA and blaZ (MRSA) resulted in positivity in 12 samples, a positivity of 10.34 % (12/116) among the S. aureus isolates.

pH, electrical conductivity and somatic cell count (SCC): The post-treatment mean ± S.E. values of milk pH, electrical conductivity and SCC for the E. coli, S. aureus and MRSA groups are given in Tables 1-3. In the E. coli, S. aureus and MRSA groups, a highly significant (P < 0.01) decrease in post-treatment milk pH values was observed in all treatment groups when compared to pre-treatment values. In the E. coli and S. aureus groups, the post-treatment pH values (Tables 1 and 2) of the treatment groups showed a lowering trend towards the control value. However, in the MRSA group, the post-treatment pH value (Table 3) of the enrofloxacin group was comparable to the control value, while that of the amoxicillin + sulbactam group was slightly above the control value. In the E. coli, S. aureus and MRSA groups, a highly significant (P < 0.01) increase in post-treatment milk electrical conductivity values (Tables 1, 2 and 3) was observed in all treatment groups when compared to pre-treatment values. However, in the E. coli and S. aureus groups, there was no significant difference between treatment groups in post-treatment values, and the post-treatment electrical conductivity of the treatment groups showed an increasing trend towards the control value.
In the MRSA group, the post-treatment electrical conductivity value of the enrofloxacin group showed an increasing trend towards the control value. Even though a significant increase in post-treatment electrical conductivity was observed in the amoxicillin+sulbactam group, it did not reach the control value. In the E. coli, S. aureus and MRSA groups, a highly significant (P < 0.01) decrease in post-treatment milk SCC values (Tables 1, 2 and 3) was observed in all treatment groups when compared to pre-treatment values. However, in the E. coli and S. aureus groups, there was no significant difference between treatment groups in post-treatment values, and the post-treatment SCC values of the treatment groups showed a lowering trend towards the control value. In the MRSA group, the post-treatment SCC value of the enrofloxacin group showed a lowering trend towards the control value, whereas the post-treatment SCC value of the amoxicillin+sulbactam group was slightly above the control value.

DISCUSSION

Mastitis is the most common cause for antibiotic use in dairy herds. However, improper use of antibiotics creates problems such as the emergence of bacterial resistance. The present study has demonstrated the existence of alarming levels of resistance of E. coli, S. aureus and MRSA to commonly used antimicrobial agents in the study farms, and the results are in accordance with reports from earlier studies in other countries. Edward et al. (2002) suggested a possible development of resistance from prolonged and indiscriminate usage of some antimicrobials. Systematic application of an in vitro antibiotic susceptibility test prior to the use of antibiotics in the treatment of intramammary infections will prevent antibiotic resistance.

Based on the antibiotic susceptibility test, E. coli, S. aureus and MRSA showed maximum sensitivity to enrofloxacin and amoxicillin + sulbactam, followed by gentamicin and ceftriaxone. These susceptible antibiotics can be used as effective drugs against resistant E. coli, S. aureus and MRSA isolates.

In the present study, in all the treatment groups of E. coli and S. aureus mastitis, the post-treatment pH and SCC were significantly decreased when compared to pre-treatment values, indicating that the treatment was effective in controlling the inflammation. The post-treatment electrical conductivity was significantly increased when compared to the pre-treatment value.

In the present study, cows affected with E. coli and S. aureus mastitis treated with amoxicillin+sulbactam, ceftriaxone, enrofloxacin and gentamicin showed uniform improvement in clinical mastitis. In E. coli mastitis, treatment with amoxicillin+sulbactam, ceftriaxone, enrofloxacin and gentamicin showed 74.1 %, 67.75 %, 76.67 % and 64.52 % clinical recovery, and in S. aureus mastitis, 65.25 %, 65.25 %, 72.43 % and 68.98 % clinical recovery; this might be because these isolates were resistant to only 1 or 2 antimicrobials. The present observation is in agreement with Karthikeyan (2003), who reported that the most sensitive antimicrobial agent against gram-negative pathogens (E. coli) was found to be enrofloxacin (100 %), followed by ciprofloxacin and gentamicin, and that the most sensitive antimicrobial agents against gram-positive pathogens were gentamicin followed by ciprofloxacin and enrofloxacin.
coli) was found to be enrofloxacin (100%), followed by ciprofloxacin and gentamicin, and that the sensitive antimicrobial agents against gram-positive pathogens were gentamicin, followed by ciprofloxacin and enrofloxacin. Related observations are available in the literature (Evira, 2009; Tufani et al., 2012). Ceftriaxone is a third-generation cephalosporin and has remarkable activity against Enterobacteriaceae (Prescott and Baggot, 1994) and Staphylococcus spp. (Sumathi et al., 2008).

In MRSA mastitis, treatment with amoxicillin + sulbactam and enrofloxacin showed 50% clinical recovery; this lower clinical recovery compared to E. coli and S. aureus mastitis might be due to multi-drug resistance, i.e. resistance to 3 or more antimicrobials.

Based on the post-treatment pH, EC and SCC values, enrofloxacin was found to be the most effective antibiotic against MRSA (Kenar et al., 2012). This might be due to its immunomodulatory (Hoeben et al., 1997), concentration-dependent and post-antibiotic effects. Hui et al. (2013) also reported that good bactericidal activity in vitro was achieved with the AMX/SUL (4:1) combination against common mastitis pathogens in cows. In the present study, amoxicillin + sulbactam was sensitive in vitro (6 out of 6 cases) but had poor efficacy in vivo (3 out of 6 cases). The lower efficacy of amoxicillin + sulbactam noticed in the current study might be due to development of resistance by bacterial strains, which was confirmed by PCR targeting the specific genes mecA and blaZ. Loeffler and Lloyd (2010) opined that detection of the mecA and blaZ genes by PCR was the gold standard test for confirmation of methicillin resistance.

Local clinical signs, such as swelling, pain and firmness of the inflamed mammary quarters, were less severe in the treated cows (Hoeben et al., 2000).

Conclusion

It was concluded that systematic application of an in vitro antibiotic susceptibility test prior to the use of antibiotics in the treatment of intra-mammary infections will prevent antibiotic resistance. Cows affected with E. coli and S. aureus mastitis treated with amoxicillin + sulbactam, ceftriaxone, enrofloxacin and gentamicin showed uniform improvement. In MRSA mastitis, treatment with amoxicillin + sulbactam and enrofloxacin showed 50% clinical recovery.

Table 1. Comparison of post-treatment values of pH, EC and SCC in E. coli mastitis among different groups of antibiotics.

Table 2. Comparison of post-treatment values of pH, EC and SCC in S. aureus mastitis among different groups of antibiotics.

Table 3. Comparison of post-treatment values of pH, EC and SCC in MRSA mastitis among different groups of antibiotics. Means bearing the same superscript in the same row do not differ significantly; ** highly significant (P < 0.01).
Event Linking with Sentential Features from Convolutional Neural Networks

Coreference resolution for event mentions enables extraction systems to process document-level information. Current systems in this area base their decisions on rich semantic features from various knowledge bases, thus restricting them to domains where such external sources are available. We propose a model for this task which does not rely on such features but instead utilizes sentential features coming from convolutional neural networks. Two such networks first process coreference candidates and their respective context, thereby generating latent-feature representations which are tuned towards event aspects relevant for a linking decision. These representations are augmented with lexical-level and pairwise features, and serve as input to a trainable similarity function producing a coreference score. Our model achieves state-of-the-art performance on two datasets, one of which is publicly available. An error analysis points out directions for further research.

Introduction

Event extraction aims at detecting mentions of real-world events and their arguments in text documents of different domains, e.g., news articles. The subsequent task of event linking is concerned with resolving coreferences between recognized event mentions in a document, and is the focus of this paper.

Several studies investigate event linking and related problems such as relation mentions spanning multiple sentences. Swampillai and Stevenson (2010) find that 28.5% of binary relation mentions in the MUC 6 dataset are affected, as are 9.4% of relation mentions in the ACE corpus from 2003. Ji and Grishman (2011) estimate that 15% of slot fills in the training data for the "TAC 2010 KBP Slot Filling" task require cross-sentential inference. To confirm these numbers, we analyzed the event annotation of the ACE 2005 corpus and found that approximately 23% of the event mentions lack arguments which are present in other mentions of the same event instance in the respective document. These numbers suggest that event linking is an important task.

Previous approaches for modeling event mentions in the context of coreference resolution (Bejan and Harabagiu, 2010; Sangeetha and Arock, 2012) either make use of external feature sources with limited cross-domain availability like WordNet (Fellbaum, 1998) and FrameNet (Baker et al., 1998), or show low performance. At the same time, recent literature proposes a new kind of feature class for modeling events (and relations) in order to detect mentions and extract their arguments, i.e., sentential features from event-/relation-mention representations that have been created by taking the full extent and surrounding sentence of a mention into account (Zeng et al., 2014; Nguyen and Grishman, 2015; dos Santos et al., 2015). Their promising results motivate our work. We propose to use such features for event coreference resolution, hoping to thereby remove the need for extensive external semantic features while preserving the current state-of-the-art performance level.

Our contributions in this paper are as follows: We design a neural approach to event linking which in a first step models intra-sentential event mentions via the use of convolutional neural networks for the integration of sentential features. In the next step, our model learns to make coreference decisions for pairs of event mentions based on the previously generated representations.
This approach does not rely on external semantic features, but rather employs a combination of local and sentential features to describe individual event mentions, and combines these intermediate event representations with standard pairwise features for the coreference decision. The model achieves state-of-the-art performance in our experiments on two datasets, one of which is publicly available. Furthermore, we present an analysis of the system errors to identify directions for further research.

Problem definition

We follow the notion of events from the ACE 2005 dataset (LDC, 2005; Walker et al., 2006). Consider the following example: British bank Barclays had agreed to buy Spanish rival Banco Zaragozano for 1.14 billion euros. The combination of the banking operations of Barclays Spain and Zaragozano will bring together two complementary businesses and will happen this year, in contrast to Barclays' postponed merger with Lloyds. (The example is based on one in Araki and Mitamura, 2015.)

Processing these sentences in a prototypical, ACE-style information extraction (IE) pipeline would involve (a) the recognition of entity mentions. In the example, mentions of entities are underlined. Next, (b) words in the text are processed as to whether they elicit an event reference, i.e., event triggers are identified and their semantic type is classified. The above sentences contain three event mentions with type Business.Merge-Org, shown in boldface. The task of event extraction further requires that (c) participants of recognized events are determined among the entity mentions in the same sentence, i.e., an event's arguments are identified and their semantic role wrt. the event is classified. The three recognized event mentions are:

E1: buy(British bank Barclays, Spanish rival Banco Zaragozano, 1.14 billion euros)
E2: combination(Barclays Spain, Zaragozano, this year)
E3: merger(Barclays, Lloyds)

Often, an IE system involves (d) a disambiguation step of the entity mentions against one another in the same document. This allows to identify the three mentions of "Barclays" in the text as referring to the same real-world entity. The analogous task on the level of event mentions is called (e) event linking (or: event coreference resolution) and is the focus of this paper. Specifically, the task is to determine that E3 is a singleton reference in this example, while E1 and E2 are coreferential, with the consequence that a document-level event instance can be produced from E1 and E2, listing four arguments (two companies, buying price, and acquisition date).

Model design

This section first motivates the design decisions of our model for event linking, before going into details about its two-step architecture.

Event features from literature

So far, a wide range of features has been used for the representation of events and relations for extraction (Zhou et al., 2005; Mintz et al., 2009; Sun et al., 2011; Krause et al., 2015) and coreference resolution (Bejan and Harabagiu, 2010; Lee et al., 2012; Araki and Mitamura, 2015; Cybulska and Vossen, 2015) purposes.
The following is an attempt to list the most common classes among them, along with examples:

• lexical: surface string, lemma, word embeddings, context around trigger
• syntactic: depth of trigger in parse tree, dependency arcs from/to trigger
• discourse: distance between coreference candidates, absolute position in document
• semantic (intrinsic): comparison of event arguments (entity fillers, present roles), event type of coreference candidates
• semantic (external): coreference-candidates similarity in lexical-semantic resources (WordNet, FrameNet) and other datasets (VerbOcean corpus), enrichment of arguments with alternative names from external sources (DBpedia, Geonames)

While lexical, discourse, and intrinsic-semantic features are available in virtually all application scenarios of event extraction/linking, and even syntactic parsing is no longer considered an expensive feature source, semantic features from external knowledge sources pose a significant burden on the application of event processing systems, as these sources are created at high cost and come with limited domain coverage. Fortunately, recent work has explored the use of a new feature class, sentential features, for tackling relation-/event-extraction related tasks with neural networks (Zeng et al., 2014; Nguyen and Grishman, 2015; dos Santos et al., 2015). These approaches have shown that processing sentences with neural models yields representations suitable for IE, which motivates their use in our approach. Our model is accordingly split into two parts: the first generates a latent-feature representation for an individual event mention from its sentence; the second part is fed with two such event-mention representations plus a number of pairwise features for the input event-mention pair, and calculates a coreference score.

Data properties

A preliminary analysis of one dataset used in our experiments (ACE++; see Section 5) further motivates the design of our model. We found that 50.97% of coreferential event-mention pairs share no arguments, either by mentioning distinct argument roles or because one/both mentions have no annotated arguments. Furthermore, 47.29% of positive event-mention pairs have different trigger words. It is thus important to not solely rely on intrinsic event properties in order to model event mentions, but to additionally take the surrounding sentence's semantics into account. Another observation regards the distance of coreferential event mentions in a document: 55.42% are more than five sentences apart. This indicates that a locality-based heuristic would not perform well and also encourages the use of sentential features for making coreference decisions.

Learning event representations

The architecture of the model (Figure 1) is split into two parts. The first one aims at adequately representing individual event mentions. As is common in the literature, the words of the whole sentence of an input event mention are represented as real-valued vectors $v^i_w$ of a fixed size $d_w$, with $i$ being a word's position in the sentence. These word embeddings are updated during model training and are stored in a matrix $W_w \in \mathbb{R}^{d_w \times |V|}$, $|V|$ being the vocabulary size of the dataset. Furthermore, we take the relative position of tokens with respect to the mention into account, as suggested by (Collobert et al., 2011; Zeng et al., 2014). The rationale is that while the absolute position of learned features in a sentence might not be relevant for an event-related decision, their position wrt. the event mention is.
Embeddings $v^i_p$ of size $d_p$ for relative positions of words are generated in a way similar to word embeddings, i.e., by table lookup from a matrix $W_p \in \mathbb{R}^{d_p \times (2 s_{max} - 1)}$ of trainable parameters. Again $i$ denotes the location of a word in a sentence; $s_{max}$ is the maximum sentence length in the dataset. Embeddings for words and positions are concatenated into vectors $v^i_t$ of size $d_t = d_w + d_p$; this means that now every word in the vocabulary has a different representation for each distinct distance with which it occurs to an event trigger. A sentence with $s$ words is represented by a matrix of dimensions $s \times d_t$. This matrix serves as input to a convolution layer. In order to compress the semantics of $s$ words into a sentence-level feature vector of constant size, the convolution layer applies $d_c$ filters to each window of $n$ consecutive words, thereby calculating $d_c$ features for each n-gram of a sentence. For a single filter $w_c \in \mathbb{R}^{n \cdot d_t}$ and a particular window of $n$ words starting at position $i$, this operation is defined as

$c_i = \mathrm{relu}(w_c \cdot v^t_{i:i+n-1} + b_c)$,

where $v^t_{i:i+n-1}$ is the concatenation of the vectors $v^j_t$ for the words in the window, $b_c$ is a bias, and relu is the activation function of a rectified linear unit. In Figure 1, $d_c = 3$ and $n = 2$.

In order to identify the most indicative features in the sentence and to introduce invariance for the absolute position of these, we feed the n-gram representations to a max-pooling layer, which identifies the maximum value for each filter. We treat n-grams on each side of the trigger word separately during pooling, which allows the model to handle multiple event mentions per sentence, similar in spirit to previous work. The pooling step for a particular convolution filter $k$ is defined as

$p_k = \max_i\, c^k_i$,

where $i$ runs through the convolution windows of $k$ on the respective side of the trigger. The output of this step are the sentential features $v_{sent} \in \mathbb{R}^{2 d_c}$ of the input event mention, the concatenation of the left and right pooled values:

$v_{sent} = [p^{left}_1, \ldots, p^{left}_{d_c}, p^{right}_1, \ldots, p^{right}_{d_c}]$.

Additionally, we provide the network with trigger-local, lexical-level features by concatenating $v_{sent}$ with the word embeddings $v^{(\cdot)}_w$ of the trigger word and its left and right neighbor, resulting in $v_{sent+lex} \in \mathbb{R}^{2 d_c + 3 d_w}$. This encourages the model to take the lexical semantics of the trigger into account, as these can be a strong indicator for coreference. The result is processed by an additional hidden layer, generating the final event-mention representation $v_e$ of size $d_e$ used for the subsequent event-linking decision:

$v_e = g(W_e \cdot v_{sent+lex} + b_e)$,

with $g$ the layer's non-linearity.
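To make the preceding architecture concrete, here is a minimal numpy sketch of the forward pass of part (a), the event-mention encoder. This is our illustration, not the authors' TensorFlow implementation: all dimensions and the random initialisation are placeholders, and the activation of the final hidden layer, which is not specified above, is assumed to be tanh.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; placeholders, not the tuned values from Table 2
V, s_max = 1000, 50                  # vocabulary size, max sentence length
d_w, d_p, d_c, d_e, n = 8, 4, 3, 6, 2

d_t = d_w + d_p
W_w = rng.normal(0, 0.1, (V, d_w))                  # word embeddings
W_p = rng.normal(0, 0.1, (2 * s_max - 1, d_p))      # relative-position embeddings
W_c = rng.normal(0, 0.1, (d_c, n * d_t))            # d_c convolution filters
b_c = np.zeros(d_c)
W_e = rng.normal(0, 0.1, (d_e, 2 * d_c + 3 * d_w))  # final hidden layer
b_e = np.zeros(d_e)

def encode_mention(word_ids, trigger):
    """Forward pass of part (a) for one sentence and one trigger position."""
    s = len(word_ids)
    # Per-token concatenation of word and relative-position embeddings
    rel = np.arange(s) - trigger + s_max - 1        # shifted into [0, 2*s_max-2]
    v_t = np.concatenate([W_w[word_ids], W_p[rel]], axis=1)      # s x d_t

    # d_c convolution features for every window of n consecutive tokens
    windows = np.stack([v_t[i:i + n].ravel() for i in range(s - n + 1)])
    c = np.maximum(windows @ W_c.T + b_c, 0.0)      # relu, (s-n+1) x d_c

    # Dual max-pooling: windows left/right of the trigger pooled separately
    split = max(1, min(trigger, len(c) - 1))
    v_sent = np.concatenate([c[:split].max(axis=0), c[split:].max(axis=0)])

    # Trigger-local lexical features: embeddings of trigger and neighbours
    neigh = [max(trigger - 1, 0), trigger, min(trigger + 1, s - 1)]
    v_sent_lex = np.concatenate([v_sent, W_w[word_ids][neigh].ravel()])

    # Additional hidden layer -> final event-mention representation v_e
    # (activation assumed tanh; the text does not name it)
    return np.tanh(W_e @ v_sent_lex + b_e)

v_e = encode_mention(rng.integers(0, V, 12), trigger=5)
print(v_e.shape)   # (d_e,) = (6,)
```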
Learning coreference decisions

The second part of the model (Figure 1b) processes the representations for two event mentions $v^1_e, v^2_e$, and augments these with pairwise comparison features $v_{pairw}$ to determine the compatibility of the event mentions. The following features are used; in parentheses we give the feature value for the pair E1, E2 from the example in Section 1:

• Coarse-grained and/or fine-grained event type agreement (yes, yes)
• Antecedent event is in first sentence (yes)
• Bagged distance between event mentions in #sentences/#intermediate event mentions (1, 0)
• Agreement in event modality (yes)
• Overlap in arguments (two shared arguments)

The concatenation of these vectors is processed by a single-layer neural network which calculates a distributed similarity of size $d_{sim}$ for the two event mentions:

$v_{sim} = (W_{sim} \cdot [v^1_e; v^2_e; v_{pairw}] + b_{sim})^2$,

with the square applied element-wise. The use of the square function as the network's non-linearity is backed by the intuition that for measuring similarity, an invariance under polarity changes is desirable. Having $d_{sim}$ similarity dimensions allows the model to learn multiple similarity facets in parallel; in our experiments, this setup outperformed model variants with different activation functions as well as a cosine-similarity based comparison. To calculate the final output of the model, $v_{sim}$ is fed to a logistic regression classifier, whose output serves as the coreference score:

$score = \sigma(w \cdot v_{sim} + b)$,

with $\sigma$ the logistic function. We train the model parameters by minimizing the logistic loss over shuffled minibatches with gradient descent using Adam (Kingma and Ba, 2014).

Example generation and clustering

We investigated two alternatives for the generation of examples from documents with recognized event mentions. Figure 2 shows the strategy we found to perform best, which iterates over the event mentions of a document and pairs each mention (the "anaphor") with all preceding ones (the "antecedent" candidates). This strategy applies to both training and inference time. The alternative, more elaborate strategy performed worse than the less elaborate algorithm in Figure 2.

The pairwise coreference decisions of our model induce a clustering of a document's event mentions. In order to force the model to output a consistent view on a given document, a strategy for resolving conflicting decisions is needed. We followed the strategy detailed in Figure 3, which builds the transitive closure of all positive links. Additionally, we experimented with Ng and Cardie (2002)'s "BestLink" strategy, which discards all but the highest-scoring antecedent of an anaphoric event mention. Liu et al. reported that for event linking, BestLink outperforms naive transitive closure; however, in our experiments (Section 5) we come to a different conclusion. Both strategies are sketched in code below.
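The sketch below is ours, not the authors' code, and assumes a 0.5 decision threshold, which the text does not state explicitly. It builds the transitive closure of all positive links with union-find, as in Figure 3, and optionally keeps only the highest-scoring positive antecedent per anaphor in the BestLink spirit.

```python
from collections import defaultdict

def cluster(num_mentions, score, best_link=False, threshold=0.5):
    """Cluster the event mentions of one document from pairwise scores.

    score(i, j) is the model's coreference score for antecedent i and
    anaphor j (i < j). With best_link=False the transitive closure of
    all positive links is built; with best_link=True only the
    highest-scoring positive antecedent per anaphor is kept.
    """
    parent = list(range(num_mentions))          # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]       # path halving
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    for j in range(1, num_mentions):            # anaphor
        positive = [(score(i, j), i) for i in range(j)
                    if score(i, j) > threshold]
        if best_link and positive:
            positive = [max(positive)]          # keep best antecedent only
        for _, i in positive:
            union(i, j)

    clusters = defaultdict(list)
    for m in range(num_mentions):
        clusters[find(m)].append(m)
    return list(clusters.values())

# Toy usage with a hard-coded score table
table = {(0, 1): 0.9, (0, 2): 0.2, (1, 2): 0.7,
         (0, 3): 0.1, (1, 3): 0.3, (2, 3): 0.4}
print(cluster(4, lambda i, j: table[(i, j)]))   # [[0, 1, 2], [3]]
```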
Experimental setting, model training

We implemented our model using the TensorFlow framework (Abadi et al., 2015, v0.6), and chose the ACE 2005 dataset (Walker et al., 2006; later: ACE) as our main testbed. The annotation of this corpus focuses on the event types Conflict.Attack, Movement.Transport, and Life.Die, reporting about terrorist attacks, movement of goods and people, and deaths of people, but also contains many more related event types as well as mentions of business-relevant and judicial events. The corpus consists of merely 599 documents, which is why we create a second dataset that encompasses these documents and additionally contains 1351 more web documents annotated in an analogous fashion with the same set of event types. We refer to this second dataset as ACE++. Both datasets are split 9:1 into a development (dev) and test partition; we further split dev 9:1 into a training (train) and validation (valid) partition. Table 1 lists statistics for the datasets.

There are a number of architectural alternatives in the model as well as hyperparameters to optimize. Besides varying the size of intermediate representations in the model ($d_w$, $d_p$, $d_c$, $d_e$, $d_{sim}$), we experimented with different convolution window sizes $n$, activation functions for the similarity-function layer in model part (b), whether to use the dual pooling and final hidden layer in model part (a), whether to apply regularization with $\ell_2$ penalties or Dropout, and parameters to Adam ($\eta$, $\beta_1$, $\beta_2$, $\epsilon$). We started our exploration of this space of possibilities from previously reported hyperparameter values (Zhang and Wallace, 2015) and followed a combined strategy of random sampling from the hyperparameter space (180 points) and line search. Optimization was done by training on ACE++ train and evaluating on ACE++ valid. The final settings we used for all following experiments are listed in Table 2. $W_w$ is initialized with the pre-trained embeddings of Mikolov et al. (2013).

Evaluation

This section elaborates on the conducted experiments. First, we compare our approach to state-of-the-art systems on the dataset ACE, after which we report experiments on ACE++, where we contrast variations of our model to gain insights about the impact of the utilized feature classes. We conclude this section with an error analysis.

Among the reported evaluation metrics, BLANC (Recasens and Hovy, 2011) has the highest validity, as it balances the impact of positive and negative event-mention links in a document. Negative links and consequently singleton event mentions are more common in this dataset (more than 90% of links are negative). As Recasens and Hovy (2011) point out, the informativeness of metrics like MUC (Vilain et al., 1995), B-CUBED (Bagga and Baldwin, 1998), and the naive positive-link metric suffers from such imbalance. We still add these metrics for completeness, and because BLANC scores are not available for all systems.

Unfortunately, there are two caveats to this comparison. First, while a 9:1 train/test split is the commonly accepted way of using ACE, the exact documents in the partitions vary from system to system. We are not aware of any publicized split from previous work on event linking, which is why we create our own and announce the list of documents in the ACE valid/test partitions at https://git.io/vwEEP. Second, published methods follow different strategies regarding preprocessing components. While all systems in Table 3 use gold-annotated event-mention triggers, Bejan and Harabagiu (2010), among others, use a semantic-role labeler and other tools instead of gold-argument information. We argue that using full gold-annotated event mentions is reasonable in order to mitigate error propagation along the extraction pipeline and make performance values for the task at hand more informative.
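Since BLANC carries the most weight in the comparisons that follow, a small sketch of the metric may help. It is written from the published definition of the original, within-document variant (Recasens and Hovy, 2011) rather than taken from any evaluation toolkit; it assumes gold and predicted clusterings over the same mention set, and the special cases for degenerate documents are omitted.

```python
from itertools import combinations

def blanc(gold_clusters, pred_clusters):
    """BLANC for one document: mean of the link and non-link F1 scores."""
    def links(clusters):
        m2c = {m: cid for cid, cl in enumerate(clusters) for m in cl}
        coref, non = set(), set()
        for a, b in combinations(sorted(m2c), 2):
            (coref if m2c[a] == m2c[b] else non).add((a, b))
        return coref, non

    gc, gn = links(gold_clusters)   # gold coref / non-coref links
    pc, pn = links(pred_clusters)   # predicted coref / non-coref links

    def f1(right, proposed, true):
        if not proposed or not true:
            return 0.0
        p, r = right / len(proposed), right / len(true)
        return 2 * p * r / (p + r) if p + r else 0.0

    return 0.5 * (f1(len(gc & pc), pc, gc) + f1(len(gn & pn), pn, gn))

gold = [[0, 1], [2], [3]]
pred = [[0, 1, 2], [3]]
print(blanc(gold, pred))   # 0.625: link F1 = 0.5, non-link F1 = 0.75
```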
Another approach with results on ACE was presented by , who employ a maximum-entropy classifier with agglomerative clustering and lexical, discourse, and semantic features, e.g., also a WordNet-based similarity mea- Table 5: Impact of feature classes; "Pw" is short for pairwise features, "Loc" refers to trigger-local lexical features, "Sen" corresponds to sentential features. sure. However, they report performance using a threshold optimized on the test set, thus we decided to not include the performance here. Further evaluation on ACE and ACE ++ We now look at several aspects of the model performance to gain further insights about it's behavior. Table 4 shows the impact of increasing the amount of training data (ACE → ACE ++ ). This increase (rows 1, 3) leads to a boost in recall, from 75.16% to 83.21%, at the cost of a small decrease in precision. This indicates that the model can generalize much better using this additional training data. Looking into the use of the alternative clustering strategy BestLink recommended by , we can make the expected observation of a precision improvement (row 1 vs. 2; row 3 vs. 4), due to fewer positive links being used before the transitive-closure clustering takes place. This is however outweighed by a large decline in recall, resulting in a lower F1 score (73.31 → 72.19; 76.90 → 71.09). The better performance of BestLink in Liu et al.'s model suggests that our model already weeds out many low confidence links in the classification step, which makes a downstream filtering unnecessary in terms of precision, and even counter-productive in terms of recall. Table 6: Event-linking performance of our model against naive baselines. Table 5 shows our model's performance when particular feature classes are removed from the model (with retraining), with row 3 corresponding to the full model as described in Section 3. Unsurprisingly, classifying examples with just pairwise features (row 1) results in the worst performance, and adding first trigger-local lexical features (row 2), then sentential features (row 3) subsequently raises both precision and recall. Just using pairwise features and sentential ones (row 4), boosts precision, which is counter-intuitive at first, but may be explained by a different utilization of the sententialfeature part of the model during training. This part is then adapted to focus more on the trigger-word aspect, meaning the sentential features degrade to trigger-local features. While this allows to reach higher precision (recall that Section 3 finds that more than fifty percent of positive examples have trigger-word agreement), it substantially limits the model's ability to learn other coreference-relevant aspects of event-mention pairs, leading to low recall. Further considering rows 5 & 6, we can conclude that all feature classes indeed positively contribute to the overall model performance. Impact of feature classes Baselines The result of applying three naive baselines to ACE ++ is shown in Table 6. The all singletons/one instance baselines predict every input link to be negative/positive, respectively. In particular the all-singletons baseline performs well, due to the large fraction of singleton event mentions in the dataset. The third baseline, same type, predicts a positive link whenever there is agreement on the event type, namely, it ignores the possibility that there could be multiple event mentions of the same type in a document which do not refer to the same real-world event, e.g., referring to different terrorist attacks. 
This baseline also performs quite well, in particular in terms of recall, but shows low precision.

Error analysis

We manually investigated a sample of 100 false positives and 100 false negatives from ACE++ in order to get an understanding of system errors. It turns out that a significant portion of the false negatives would involve the resolution of a pronoun to a previous event mention, a very hard and yet unsolved problem. Consider the following examples:

• "It's crazy that we're bombing Iraq. It sickens me."
• "Some of the slogans sought to rebut war supporters' arguments that the protests are unpatriotic. [...] Nobody questions whether this is right or not."

In both examples, the event mentions (trigger words in bold font) are gold-annotated as coreferential, but our model failed to recognize this. Another observation is that for 17 false negatives, we found analogous cases among the sampled false positives where annotators made a different annotation decision. Consider examples in which each pair consists of two event mentions (in bold font) taken from the same document and referring to the same event type, i.e., Personnel.Elect. While in the first example the annotators identified the mentions as coreferential, the second pair of mentions is not annotated as such. Analogously, 22 out of the 100 analyzed false positives were cases where the misclassification of the system was plausible to a human rater. This exemplifies that this task has many boundary cases where a positive/negative decision is hard to make even for expert annotators, thus putting the overall performance of all models in Table 3 in perspective.

Related work

We briefly point out other relevant approaches and efforts from the vast amount of literature.

Event coreference: In addition to the competitors mentioned in Section 5, further approaches for event linking determine link scores with hand-crafted compatibility metrics for event-mention pairs and a maximum-entropy model, and feed these to a spectral clustering algorithm. A variation of the event-coreference resolution task extends the scope to cross-document relations. Cybulska and Vossen (2015) approach this task with various classification models and propose to use a type-specific granularity hierarchy for feature values. Lee et al. (2012) further extend the task definition by jointly resolving entity and event coreference, through several iterations of mention-cluster merge operations. Sachan et al. (2015) describe an active-learning based method for the same problem, where they derive a clustering of entities/events by incorporating bits of human judgment as constraints into the objective function. Araki and Mitamura (2015) simultaneously identify event triggers and disambiguate them wrt. one another with a structured-perceptron algorithm.

Resources: Besides the ACE 2005 corpus, a number of other datasets with event-coreference annotation have been presented. One line of work reports on the annotation process of two corpora from the domains of "violent events" and biographic texts; to our knowledge neither of them is publicly available. OntoNotes (Weischedel et al., 2013) comprises different annotation layers including coreference (Pradhan et al., 2012); however, it intermingles entity and event coreference. A series of releases of the EventCorefBank corpus (Bejan and Harabagiu, 2010; Lee et al., 2012; Cybulska and Vossen, 2014) combine linking of event mentions within and across documents, for which a lack of completeness on the within-document aspect has been reported.
The ProcessBank dataset (Berant et al., 2014) provides texts with event links from the difficult biological domain.

Other: A few approaches to the upstream task of event extraction, while not considering within-document event linking, still utilize discourse-level information or even cross-document inference. For example, Liao and Grishman (2010) showed how the output of sentence-based classifiers can be filtered wrt. discourse-level consistency. Yao et al. (2010) resolved coreferences between events from different documents in order to make a global extraction decision, similar to (Ji and Grishman, 2008) and (Li et al., 2011). In addition to convolutional neural networks, more types of neural architectures lend themselves to the generation of sentential features. Recently, many recursive and recurrent networks have been proposed for the task of relation classification, with state-of-the-art results (Socher et al., 2012; Hashimoto et al., 2013; Ebrahimi and Dou, 2015).

Conclusion

Our proposed model for the task of event linking achieves state-of-the-art results without relying on external feature sources. We have thus shown that low linking performance, coming from a lack of semantic knowledge about a domain, is avoidable. In addition, our experiments give further empirical evidence for the usefulness of neural models for generating latent-feature representations for sentences.

There are several areas for potential future work. As next steps, we plan to test the model on more datasets and task variations, i.e., in a cross-document setting or for joint trigger identification and coreference resolution. On the other hand, separating anaphoricity detection from antecedent scoring, as is often done for the task of entity coreference resolution (e.g., by Wiseman et al. (2015)), might result in performance gains; also the generation of sentential features from recurrent neural networks seems promising. Regarding our medium-term research agenda, we would like to investigate if the model can benefit from more fine-grained information about the discourse structure underlying a text. This could guide the model when encountering the problematic case of pronoun resolution, described in the error analysis.
Preventing panic disorder: cost-effectiveness analysis alongside a pragmatic randomised trial

Background: Panic disorder affects many people, is associated with a formidable disease burden, and imposes costs on society. The annual influx of new cases of panic disorder is substantial. From the public health perspective it may therefore be a sound policy to reduce the influx of new cases, to maintain the quality of life in many people, and to avoid the economic costs associated with the full-blown disorder. For this purpose, prevention is needed. Here we present the first economic evaluation of such an intervention.

Methods: Randomised trial of 117 people with panic disorder symptoms not meeting the diagnostic criteria of DSM-IV panic disorder. The interventions were time-limited cognitive-behavioural therapy v care-as-usual. The central clinical endpoint was DSM-IV panic disorder-free survival over 3 months. Costs were calculated from the societal perspective. Using the bootstrap method, incremental cost-effectiveness ratios were obtained, placed in 95% confidence intervals, projected on the cost-effectiveness plane, and presented as acceptability curves.

Results: The median incremental cost-effectiveness ratio is €6,198 (95% CI 2,435-60,731) per PD-free survival gained, which has a likelihood of 75.2% of being more acceptable from a cost-effectiveness point of view than care-as-usual when a willingness-to-pay ceiling of €10,000 per PD-free survival is assumed. The most significant cost driver was therapists' time. A sensitivity analysis indicated that cost-effectiveness improves when the number of therapist hours is reduced.

Conclusion: This is the first economic evaluation alongside a prevention trial in panic disorder. The small sample (n = 117) and the short time horizon of 3 months preclude firm conclusions, but our findings suggest that the intervention may be acceptable from a cost-effectiveness point of view, especially when therapist involvement can be kept minimal. Nevertheless, our results must await replication in a larger trial with longer follow-up times before we can confidently recommend implementation of the intervention on a broad scale. In the light of our findings and given the burden of panic disorder, such a new trial is well worth the effort.

Trial registration: Current Controlled Trials ISRCTN33407455.

Background

Panic disorder (PD) is characterised by a substantial influx of new cases. This influx occurs at a rate of 780 new (i.e., first-ever) cases per 100,000 person-years in the adult population of 18-65 years in the Netherlands [1]. Their numbers should be compared with the annual number of 2,200 prevalent cases of PD in the same source population [2]. Looking at these figures one cannot escape the conclusion that 780/2,200 = 35.5% of the prevalent cases are, in fact, new cases. This casts doubts on the wisdom of relying solely on curative treatments directed at full-blown PD cases, and underscores the importance of prevention directed at sub-clinical cases, i.e. people with some PD symptoms who do not meet the diagnostic criteria for the disorder, and are thus 'at risk' of becoming cases [3]. In this context, it is worth noting that primary prevention of mental disorders appears to be a viable option [4].
Both sub-clinical and full-blown forms of panic disorder are associated with substantial costs due to excessive health care uptake, patients' out-of-pocket costs and production losses. By a conservative estimate, a full-blown PD case generates costs of about €10,000 per patient per year, and a sub-clinical case generates €6,000, even surpassing the costs of, for example, depressive disorder [5-8]. This suggests that preventive interventions for PD are likely to be cost-effective [9,10]. Against this background it was decided to develop and evaluate a brief preventive intervention based on cognitive-behavioural therapy (CBT) for panic disorder. Here, we report on the incremental cost-effectiveness ratio expressed as costs per PD-free survival time.

Therapists' involvement is a significant cost driver, but can be brought under some control, for example by reducing the number of hours they invest in the intervention and by relying more on self-help on the part of the participants [11]. The latter can be done with the aid of computers [12,13]. Therefore, we conducted a sensitivity analysis for different hypothetical levels of therapist involvement: two therapists at all sessions (high), one therapist at all sessions (medium), one therapist at only the first part of each session (low). To our knowledge this is the first economic evaluation of a preventive intervention in sub-clinical manifestations of panic disorder.

Design

This study was part of a larger multi-site trial. The larger trial was described in detail by Meulenbeek and colleagues [14]. In the larger trial two groups were recruited: people with relatively mild manifestations of MINI-DSM-IV panic disorder (N = 100) and people with sub-clinical manifestations not meeting the diagnostic criteria (N = 117). Here we limit attention to the latter group: people at risk of becoming cases of panic disorder. It is worth noting that the large trial was conducted with randomisation stratified for both groups. Therefore, one can consider the subset of people with sub-clinical panic disorder as constituting a trial within a trial. The study was designed as a pragmatic trial, mimicking the Dutch health care system as closely as possible in terms of patient recruitment and the methods used for intake, offering the intervention, and monitoring outcomes. Measurements were carried out at baseline (t0) and after three months (t1). In the treatment arm an extended follow-up was conducted after 6 months (t2) to monitor effect maintenance over time. The design was approved by an independent medical ethics committee (METIGG), and the trial was registered at ISRCTN under number 33407455 before commencement.

Recruitment

Participants were recruited through media announcements and via the internet. People who expressed interest underwent intake at any of the 17 participating regional mental health services. Participants had to meet the following criteria for inclusion: age over 18 years, presenting with symptoms of PD falling below the cut-off of 13 on the Panic Disorder Severity Scale (PDSS) [15], no current treatment for PD-related complaints, no illness requiring immediate medical attention, and able to function independently as well as in a group. In addition, each of the candidates received a diagnostic interview with the Mini International Neuropsychiatric Interview, MINI [16]. This was done to ascertain the DSM-IV PD status [17], assess the presence of co-morbid agoraphobia, and to exclude the presence of major depressive disorder.
The participants had to give written informed consent before entering the trial. See Figure 1 for the flow of participants through the trial.

Power

With n = 139 per condition, the original trial was powered to detect a clinically relevant difference between the conditions corresponding to a mean standardised difference score (Cohen's d) of at least 0.35 in a 2-sided test at α = 0.05 and a power of (1-β) = 0.80. However, the present study is based on only a subset of the data, and is underpowered. Therefore, we refrain from hypothesis testing. Instead we present outcomes in probabilistic terms, e.g. the likelihood (in %) that the intervention is superior to care-as-usual in terms of avoiding new onsets of panic disorder in a cost-effective way.

Randomisation

Concealed randomisation was conducted centrally by an independent third party who was 'blind' with respect to the participants. Participants were randomised in blocks of 2 with equal probability to either care-as-usual (N = 108, of these N = 58 with sub-clinical PD), or to the intervention (N = 109, of these N = 59 with sub-clinical PD). As said, randomisation was stratified for sub-clinical v clinical case levels of PD, and also with regard to presence v absence of co-occurring agoraphobia. The latter was done because it was assumed that agoraphobia is a prognostically relevant factor for outcome.

Conditions

The experimental condition was a time-limited cognitive-behavioural preventive intervention for panic disorder: the Don't Panic course [18]. The choice for cognitive-behavioural therapy (CBT) was based on the understanding that CBT is the most cost-effective intervention for PD in the curative setting [19,20]. The intervention consisted of 8 sessions of 2 hours each, followed by one booster session, again of 2 hours. The booster session was offered three months after completion of the course. The course was offered by a prevention worker and a clinician to groups of 10 (9 to 12) adults. The prevention workers and clinicians were all experienced in giving CBT-based interventions. In addition, they received training in offering the Don't Panic course, and they were to adhere to the treatment protocol [18]. Participants received an accompanying course book [21]. The intervention was well structured and consisted of psycho-education about anxiety and panic attacks, changing life-style, managing stress, relaxation training, cognitive restructuring, interoceptive exposure and in vivo exposure. Each session consisted of a review of homework assignments, followed by feedback, rehearsals, information about the upcoming topic and practical skill-training. The intervention was extensively pilot tested before entering the clinical trial stage [22].

The control condition consisted of people randomised to a waiting list. They entered the waiting list on the understanding that they would receive the Don't Panic intervention after 3 (max. 6) months. Waitlisted participants were free to make use of health services and take medication if so required. The control condition could therefore be described as care-as-usual (CAU), but it is CAU with one exception: people were expecting to receive help for their PD symptoms in the near future.

Outcome

The central clinical endpoint was DSM-IV panic disorder status (APA, 1994) at follow-up as measured with the Mini International Neuropsychiatric Interview, MINI [16].
In order to monitor effect maintenance over time, the self-report version of the Panic Disorder Severity Scale (PDSS-SR) [15] was used to measure severity of panic and agoraphobic symptoms, which was done at t0, t1 and t2.

Costs

Data on costs were collected with the Trimbos Institute and Institute of Medical Technology Assessment Cost questionnaire for Psychiatry (TIC-P) [23]. The TIC-P was the most frequently used health service receipt questionnaire in the Netherlands. It used a self-report format about resource use in the last four weeks. It also contained a section on productivity losses due to absenteeism and working less efficiently while at work but not feeling well. All costs that were relevant from the societal perspective were included. These costs can be grouped as follows:

• Costs of health service uptake in primary care and outpatient mental health services, including transport by ambulance, visits to emergency departments and use of ECG when tachycardia or myocardial infarction was suspected (see Table 1 for details).
• Costs of medication, specifically benzodiazepines, tranquilisers and sleep medication (calculated as cost price per standard daily dose as obtained from the Pharmaceutical Compass at http://www.fk.cvz.nl, plus 6% value added tax, multiplied by the number of prescription days, plus the pharmacist's dispensing costs of €6.45 per prescription).
• Patients' out-of-pocket costs for making visits to health services, i.e. costs of travel, parking and time costs (see Table 1 for details).
• Costs related to production losses due to absenteeism [24,25] and lesser efficiency while at work but not feeling well [26], valued at €33.90 per lost hour in paid work [27] and, in the domestic sphere, at €8.30 per lost hour, equivalent to the cost of one hour of domestic help [27].
• Intervention costs, calculated as the costs of therapists' time (€124 per hour) devoted to the intake (1 hour), 8 sessions plus one booster session (2 hours each), administration and preparation (1 hour per session), the intervention protocol (€45 per therapist), and the course book (€25 per participant). A naturalistic study showed that the number of participants in the Don't Panic courses was on average ten [22]. Accordingly, the per-participant costs of the intervention were calculated to be €750.
• Because the number of hours invested by therapists in the intervention was the single largest cost driver, the per-patient costs for the hypothetical scenarios in which therapists' involvement is reduced to 50% and 25% were calculated at €375 and €190, respectively, and these figures were used in the sensitivity analysis.

The costs were calculated in accordance with the pertinent Dutch guideline [27] and they reflect the full economic costs of the services. The sum of all costs is called 'total costs', and is expressed as monthly per capita costs in Euro (€). The reference year is 2003. For that year, the Organisation for Economic Co-operation and Development (OECD) equates US$1 with €0.932, taking into account both the currency exchange rate and the purchasing power in the US and in the Netherlands (see: http://www.oecd.org and look for purchasing power parities, PPPs).

Analyses

All analyses were conducted in agreement with the intention-to-treat principle [28].
Therefore, all participants were analysed in the condition to which they were randomised, and missing endpoints at follow-up were imputed using a regression model with the best available predictors of outcome and the best predictors of dropout. The first set of predictors was required to get the most precise estimates for the missing values, the latter to correct for bias that may stem from differential loss-to-follow-up associated with t0 variables [29]. These variables were identified using logistic regression analyses with MINI PD status and dropout as the dependent variables, and age, gender, partner status and employment status, measured at baseline, as predictors. With the help of these variables missing endpoints were predicted, and the predicted values were used to replace the missing endpoints. The remainder of the analyses was conducted on the imputed data, in four steps.

First, it was ascertained how many people became a case of MINI DSM-IV PD in each of the conditions at follow-up. This was done to assess the risk of becoming a case conditional on receiving or not receiving the intervention. The probability of not becoming a PD case is equal to 1 - risk, and this was interpreted as the likelihood of having a panic disorder-free survival over three months. The incremental effectiveness was computed as the difference between the probabilities of PD-free survival.

Second, the mean total costs for each of the conditions were calculated, both at baseline and follow-up. The pre-post differences in costs were computed to obtain the increase (or decrease) of costs over time within each of the conditions. The incremental cost-effectiveness ratio (ICER) was subsequently calculated as the incremental costs for a health gain of a PD-free survival over three months [30-32].

Third, a scatter plot of 2,500 bootstrapped ICERs on the cost-effectiveness plane was produced, by repeatedly drawing a random sample with replacement of size n from the original trial data (also of size n), computing the ICER and plotting that ICER on the cost-effectiveness plane [33]. This helped to produce estimates of the probability that (i) better health was generated for additional costs, (ii) the intervention was inferior relative to the control condition because less health was produced for additional costs, (iii) less health was generated for less costs, and (iv) the intervention dominates because better outcomes were obtained for less costs. The bootstrap analysis also helped to obtain the median ICER and its 95% confidence interval. The latter was based on the 2.5th and 97.5th percentiles of the distribution of the 2,500 bootstrapped ICERs.

Finally, we wanted to answer the question whether the incremental costs were balanced by the health gains to such an extent that one would be willing to pay for the additional costs to receive the additional health gain. Making such a judgement was complicated by the fact that the exact willingness-to-pay (WTP) ceiling for a unit health gain is an unknown quantity. Instead of relying on a single WTP ceiling, a series of ceilings was used to calculate the probability that the intervention was more acceptable than care-as-usual from a cost-effectiveness point of view for each of the WTP ceilings. This was presented as an ICER acceptability curve [33,34], where increasing WTP ceilings were placed on the horizontal axis, while the probability of finding the intervention acceptable from a cost-effectiveness point of view was placed on the vertical axis.
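For concreteness, here is a minimal Python sketch of steps three and four; the actual analyses were run in Stata and Excel (see below), and the per-participant cost and outcome vectors used here are synthetic placeholders rather than the trial data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic placeholder data, one entry per participant:
# three-month incremental cost (EUR) and PD-free survival (1 = PD-free)
cost_t, eff_t = rng.normal(797, 400, 59), rng.binomial(1, 0.86, 59)
cost_c, eff_c = rng.normal(0, 400, 58), rng.binomial(1, 0.74, 58)

B = 2500
d_cost, d_eff = np.empty(B), np.empty(B)
for b in range(B):
    t = rng.integers(0, len(cost_t), len(cost_t))   # resample per arm
    c = rng.integers(0, len(cost_c), len(cost_c))
    d_cost[b] = cost_t[t].mean() - cost_c[c].mean()
    d_eff[b] = eff_t[t].mean() - eff_c[c].mean()

# Median ICER and percentile CI (ratios are unstable when the effect
# difference crosses zero, one reason to prefer the acceptability curve)
icer = d_cost / d_eff
print("median ICER:", np.median(icer))
print("95% CI:", np.percentile(icer, [2.5, 97.5]))

# Acceptability curve: P(net benefit > 0) over a grid of WTP ceilings
for wtp in (2_000, 6_000, 10_000, 20_000):
    print(wtp, np.mean(wtp * d_eff - d_cost > 0))
```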
All analyses were conducted in Stata [35] and Excel Professional 2003. Three analysts independently performed the analyses to cross-check results (see Authors' Contributions).

Sensitivity analysis

The single most important cost driver was therapists' time. It was also a cost driver that can be brought under some amount of control, for example by relying more on (computer-aided) self-help, reducing the number of sessions, or increasing the number of participants. By way of sensitivity analyses, all previous analyses were repeated under different time-investment scenarios. In the actual situation 2 therapists participated in all sessions during the full two-hour session. This is an intensive form of therapist guidance and we refer to the costs in this situation as 'high'. In an alternative hypothetical scenario a single therapist conducts the sessions, thus avoiding the costs of the co-therapist. We refer to this scenario as 'medium'. Finally, one therapist could spend one hour with the group of participants (not the full two hours per session), which would further cut down therapist time. This scenario we call 'low'. The analyses were repeated for the three scenarios under the assumption that the effectiveness of the intervention would be reduced when therapist time was reduced. We will return to this issue later.

Characteristics of the sample

At baseline, no differences were found between the treatment (n = 59) and care-as-usual (n = 58) conditions with regard to gender (71% female), age (mean 43.16 years, s.d. = 13.05), years of education (mean 13.72, s.d. = 3.09), partner status (75% living with a partner) and employment status (66% had a job). The mean PDSS-SR panic and agoraphobic symptom severity level in the experimental group was 6.26 (s.d. = 4.00); in the control group this was 6.01 (s.d. = 3.76), which was not statistically different (mean difference = 0.25, s.e. = 0.72, t = 0.34, P = 0.732). It was also checked whether randomisation had resulted in a balanced distribution of co-morbid depression, agoraphobia and social phobia across the treatment conditions, and this was indeed the case (χ²(1) = 0.000, P = 1.00; χ²(1) = 0.031, P = 0.86; χ²(1) = 0.006, P = 0.94, respectively). Finally, age of onset of panic disorder symptoms was also evenly distributed across the conditions (mean age of 30.6 and 31.3 years in the control and treatment conditions respectively; t = 0.34; P = 0.738).

Adherence to the intervention

After each session the therapists registered the attendance of the participants and ascertained whether they had done their homework. On the basis of this information, 2 (3%) of the 59 subjects randomised to the experimental group did not start the course. Reasons for not starting the course were either starting another course or lack of time because of work. Forty-six (78%) subjects of the experimental group completed the course (completing the course is defined as attending at least 6 sessions). The main reasons for not completing the course were of a practical nature (e.g., work, illness). The mean number of attended sessions was 6.4. Of the attending participants, 77% had completed their homework for each session, 21% did not complete their homework for 1 session, and 2% for 4 sessions.

Concurrent medication use

There was no significant difference in the use of medication between the groups at baseline.
In the experimental group 16 (27%) participants used medication at baseline, 1 (2%) started medication during the course and 3 (5%) stopped using medication. In the control group 28 (48%) participants used medication at baseline, 2 (3%) started and 2 (3%) stopped medication in the period between baseline and t1. Therefore, it is unlikely that the present findings can be explained by changes in medication use.

Incremental effectiveness

After the intervention, at t1, the intervention group had 51 people not meeting the DSM-IV/MINI criteria for PD. The probability of PD-free survival over three months was therefore 51/59 = 0.86. In the care-as-usual group the probability of PD-free survival was 0.74 (43/58). The incremental effectiveness was calculated as the difference between the probabilities of a beneficial outcome in each of the conditions, i.e. 0.86 - 0.74 = 0.12. The incremental effectiveness has been defined as the clinical parameter of interest in the remainder of this study. Its inverse equals the number needed to be treated, NNT, as 1/0.12 = 8.3, indicating that somewhat more than 8 people with sub-clinical symptom levels have to become recipients of the intervention (rather than care-as-usual) to generate a health gain of one additional PD-free survival over three months in one of them.

Incremental costs

Table 2 gives an overview of the monthly per capita costs (means, s.d.) in each of the conditions at t0 and t1. The baseline costs are presented in the column labelled t0. As can be seen, the total costs in the care-as-usual group (mean €346) are higher than in the treatment group (mean €222). This indicates that randomisation failed to produce evenly distributed costs across the conditions at t0. Closer inspection reveals that this difference is caused by the higher costs associated with production losses in the care-as-usual group. In the light of the difference between the conditions at baseline, we decided to calculate the pre-post differences in the costs in each of the conditions, because it would be wrong to solely focus on the costs at t1 and to ignore the 'false start' at t0. The pre-post difference of the total costs in the treatment group is €222 - €255 = -€33. Likewise, the pre-post difference in the care-as-usual group is €346 - €426 = -€80. Finally, the incremental costs are calculated as the difference between the conditions, hence €80 - €33 = €47, indicating that the intervention is associated with somewhat higher monthly per capita costs than care-as-usual, but this difference is hardly appreciable from an economic point of view.

It is worth noting that the costs in the treatment condition were further reduced at the extended six-month follow-up (t2). These costs were €222 at t0, became €255 at t1, and then became €197 (s.d. = 351) at t2.

So far we have ignored the intervention costs. These are €750 per recipient. These costs dominate the incremental costs of €47 at t1. Therefore, the intervention costs must be seen as the single most important cost driver. When the intervention costs are included in the above calculations, the incremental costs at t1 are €797, €422 and €237 for each of the (hypothetical) intensity levels.

Incremental cost-effectiveness

Substitution of the incremental costs (€797) and the incremental effects (0.12) in the formula for the incremental cost-effectiveness ratio indicates that the corresponding mean ICER is €6,642 per three months PD-free survival time for the factual condition.
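The headline numbers follow from elementary arithmetic, reproduced here as a check; rounding the risk difference to 0.12 before taking reciprocals mirrors the text above.

```python
p_treat = 51 / 59            # PD-free survival, intervention arm (0.86)
p_cau = 43 / 58              # PD-free survival, care-as-usual arm (0.74)

delta_eff = round(p_treat - p_cau, 2)   # 0.12, rounded as in the text
nnt = 1 / delta_eff                     # 8.3 recipients per extra PD-free survival

delta_cost = 47 + 750        # EUR: monthly cost difference + intervention costs
icer = delta_cost / delta_eff           # EUR 6,642 per PD-free three months

print(round(nnt, 1), round(icer))       # 8.3 6642
```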
The arithmetic mean may not provide the best estimate for the ICER. Therefore we also present the median ICER, which is based on the bootstrap distribution of 2,500 simulated ICERs. The median ICER is €6,198, with a 95% confidence interval of €2,435 - €60,731, indicating a large amount of stochastic uncertainty.

Sensitivity analysis

The above analyses were repeated for the hypothetical scenarios where therapist time is reduced. The incremental costs are €797, €422 and €237 for the high, medium and low intensity levels, respectively; reducing therapist time thus lowers the incremental costs. The literature shows that reducing therapist time in delivering cognitive-behavioural therapy (CBT) for panic disorder has only a limited impact on effectiveness. To illustrate, Kenardy and colleagues [12] found that 12 sessions of CBT were only somewhat more effective than 6 sessions. Moreover, they found that 6 sessions of CBT augmented with computer-aided CBT was almost as good as 12 CBT sessions under the guidance of a therapist. This appears an interesting option for reducing therapists' time while not sacrificing too much of the intervention's efficacy. Nevertheless, we prefer to make conservative assumptions and assume that the observed incremental effect of 0.12 will be reduced to 0.09 and 0.06 when therapist involvement is reduced. This results in mean ICERs of €6,642, €4,689 and €3,950 per three months of PD-free survival time for the factual condition and the two hypothetical scenarios. Table 3 presents the median ICERs and their confidence intervals. The upper limit of the scenario where a single therapist is involved for only one hour per session could not be calculated, and was replaced by an infinity sign.

Uncertainty and acceptability

A way to obtain an understanding of the uncertainty is given in Figure 2. This figure presents a scatter of 2,500 simulated bootstrap ICERs on the cost-effectiveness plane for each of the intensity levels (left-hand panel). The axes divide the cost-effectiveness plane into four quadrants. The vast majority (>87%) of the simulated ICERs fall into the north-east quadrant for each of the three intensity levels, indicating that the intervention produces better health at additional costs relative to the comparator condition. It is also shown that the scatter shifts to the south when we move from the high to the low intensity level of therapist involvement, indicating that the intervention becomes progressively less costly in producing health gains when therapists' time is reduced. At the same time, the scatters move slightly to the west, indicating that effectiveness also diminishes when intensity levels are reduced. The degree of uncertainty precludes statistically significant findings, but one way to tackle this problem is to adopt a probabilistic approach: that is, to present an ICER acceptability curve, which gives the probability (in %) that the treatment is more acceptable from a cost-effectiveness point of view than its alternative, given various ceilings for the willingness to pay (WTP) for a PD-free survival of three months. To illustrate, when the WTP ceiling for a PD-free period is €10,000, the intervention has a probability of 75.2% of being more acceptable than care-as-usual under the factual conditions characterised by high levels of therapist involvement. For the hypothetical medium and low levels of therapist involvement, this probability changes to 82.4% and 75.4%, respectively. As can be seen, below the WTP ceiling of €10,000 the acceptability depends greatly on the willingness to pay: the curve is steep at the lower end of WTP ceilings, but beyond €10,000 it becomes flat and is then fairly insensitive to WTP levels. This indicates that a decision-maker's choice at a WTP of €10,000 and over is not surrounded by much uncertainty, even though the ICERs have broad confidence intervals. Below the €10,000 threshold, the conclusion about the relative cost-effectiveness of the intervention depends on the precise WTP ceiling; it also depends on the intensity level of therapist involvement. A sketch of how such bootstrap scatters and acceptability probabilities are computed follows below.
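The following sketch illustrates the mechanics of the 2,500-replicate bootstrap and the net-monetary-benefit rule that underlies an acceptability curve. The per-patient records are not published, so synthetic data stand in for them; every distributional choice in the code is hypothetical, and only the arm sizes, success counts and rough cost magnitudes echo the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the (unpublished) per-patient records; only the
# arm sizes, success proportions and rough cost magnitudes echo the text.
eff_t = rng.binomial(1, 51 / 59, size=59)       # PD-free at t1, intervention
eff_c = rng.binomial(1, 43 / 58, size=58)       # PD-free at t1, care-as-usual
cost_t = rng.normal(750 + 302, 350, size=59)    # costs incl. intervention (EUR)
cost_c = rng.normal(255, 350, size=58)          # spread (s.d. 350) is assumed

B, wtp = 2500, 10_000                           # replicates, WTP ceiling (EUR)
icers, acceptable = [], 0
for _ in range(B):
    bt = rng.integers(0, 59, 59)                # resample patients with
    bc = rng.integers(0, 58, 58)                # replacement, per arm
    d_e = eff_t[bt].mean() - eff_c[bc].mean()   # incremental effect
    d_c = cost_t[bt].mean() - cost_c[bc].mean() # incremental cost
    icers.append(d_c / d_e if d_e else np.inf)
    acceptable += (wtp * d_e - d_c) > 0         # net monetary benefit > 0

# Naive percentile interval; ratios on the CE plane need care whenever the
# incremental effect can change sign across replicates.
print("median ICER:", round(np.median(icers)))
print("95% CI:", np.percentile(icers, [2.5, 97.5]).round())
print(f"P(acceptable at WTP {wtp}): {acceptable / B:.0%}")
```

Sweeping `wtp` over a grid of ceilings and plotting the acceptance probability against it yields the acceptability curve discussed above.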
Discussion

This study was conducted to assess the potential of a time-limited cognitive behavioural intervention for preventing the onset of panic disorder in a cost-effective way. From a larger trial the group at risk of becoming PD cases (n = 117) was selected. Of these, 59 participated in the Don't Panic course, while 58 participants were randomised to a waiting list with unrestricted access to care-as-usual. The cost-effectiveness analysis was conducted from the societal perspective, thus including the direct medical, direct non-medical and indirect costs. The central clinical outcome was MINI/DSM-IV panic disorder (PD)-free survival time over three months. Therapists' involvement was the single most important cost driver, and three intensity levels (high, medium and low) were evaluated, with the high level corresponding to the factual situation and the medium and low levels presented as hypothetical scenarios.

Figure 2. Distribution of bootstrapped ICERs (n = 2,500) in the cost-effectiveness plane and the ICER acceptability curve for each of the three intensity levels of therapists' involvement.

Main findings

The data suggest that the Don't Panic course is more successful in preventing new onsets of DSM-IV panic disorder in people with sub-clinical symptoms than a waitlist control condition with unrestricted access to care-as-usual. The success rates are 86% in the treatment condition versus 74% in the care-as-usual condition, yielding a difference of 12% in favour of the intervention. Not only is the incidence of a disabling disorder reduced, but this is achieved on average for €6,642 (median €6,198) per PD-free survival over three months. It can be concluded that there is a probability larger than 75% that the intervention is more acceptable from a cost-effectiveness point of view than care-as-usual when the willingness to pay (WTP) for a PD-free survival time of three months has a ceiling of €10,000. Beyond that ceiling, the likelihood that the intervention must be regarded as acceptable from the health economic point of view remains high and is not surrounded by much doubt. However, below the WTP ceiling of €10,000 the acceptability of the intervention depends crucially on the WTP and is also sensitive to the level of therapist involvement. In fact, the sensitivity analyses showed that the amount of therapist involvement in the intervention is an important cost driver. It is also a parameter that is under (partial) control of the management of the mental health service offering the intervention. The cost-effectiveness of the intervention is much improved when therapist involvement is reduced by a factor of 2 or 4: the median ICERs then become €3,792 and €2,511, even when taking into account that reduced therapist involvement may be associated with lower efficacy of the intervention.

Limitations

The findings of this study have to be placed in the context of its limitations.
First, the study was part of a larger trial and was not powered to detect statistically significant differences in outcomes and costs. Therefore we refrained from hypothesis testing. Instead, we took a probabilistic approach, indicating the likelihood (in %) that the intervention is superior to care-as-usual from a health economic point of view.

Second, the time horizon of the trial was limited to only three months. This period is too short to draw convincing conclusions about preventing a disorder with an often chronic and recurrent course. Moreover, the MINI/DSM-IV PD status at t1 refers to the last month, but we interpreted it as an indicator of PD-free survival over the last three months. Hence, we may have missed an episode of PD between t0 and t1. However, it is worth mentioning that the extended follow-up in the treatment arm of the trial showed that effects are maintained for at least six months over a range of clinical outcomes (data not shown). This strengthens the impression that the effects induced by the intervention persist over time, making it less likely that some people who were PD-free at t1 had experienced PD between t0 and t1.

Third, the baseline costs were somewhat higher in the control group than in the intervention group. This difference of €124 was not significant (s.e. = 77.87; t = -1.59; P = 0.131). Nevertheless, we accounted for this baseline difference in our analysis by subtracting it from the costs at follow-up.

Fourth, in a sensitivity analysis the actual situation was compared with two hypothetical scenarios. This indicated how reducing therapist involvement would help to lower the costs of the intervention and thus improve its cost-effectiveness. Reducing therapist time could be achieved, for example, by delegating certain tasks from the therapist to a computer and offering adjunctive computer-aided CBT for panic disorder [12,13]. However, in these scenarios we had to make assumptions about how less therapist involvement would affect efficacy. To this end we reduced the incremental effectiveness from the observed 0.12 to the hypothetical values of 0.09 and 0.06, but these values are somewhat arbitrary, and the trade-offs between costs and effects may have been different from what was modelled. It is worth mentioning, however, that we made a conscious decision to make conservative assumptions about the effect of less therapist time on the treatment response, such that the null hypothesis of no effect was strengthened. In light of these limitations our findings should be interpreted with some caution.

Conclusion

This is the first economic evaluation of a preventive intervention in panic disorder. The outcomes are encouraging, but are based on a small trial with a short follow-up. Therefore, our principal conclusion is that our findings must await replication in a larger trial with a longer follow-up before confident recommendations can be made with regard to the broader implementation of the intervention. It is our recommendation that such a trial be conducted in the near future. After all, panic disorder is a crippling condition associated with reduced quality of life and has formidable economic ramifications for patients, the health care system, and society as a whole. To curb the massive annual influx of new cases of panic disorder, to maintain quality of life in many, and perhaps to avoid some of the costs associated with the full-blown disorder, we need a cost-effective preventive intervention for panic disorder.
Our data suggest that the Don't Panic course is likely to be one such candidate, especially when the intervention is offered by one qualified therapist. Still, a larger replication study is needed to further test this proposition. Under the current conditions, the Don't Panic course is perhaps best offered as an economically affordable first step in a stepped-care approach for PD, thus allowing people to step up to more intensive forms of treatment, should that be required.
Hypertriglyceridemia and adverse outcomes during pregnancy

Corresponding author: Jonathan Cortés-Vásquez. Lipids and Diabetes Unit, Department of Physiological Sciences, Faculty of Medicine, Universidad Nacional de Colombia. Carrera 30 No. 45-03, building 471, floor 4. Telephone number: +57 1 3165000, ext.: 15054; Mobile number: +57 3045467063. Bogotá D.C. Colombia. Email: joacortesva@unal.edu.co. DOI: http://dx.doi.org/10.15446/revfacmed.v66n2.60791

REVIEW ARTICLE

Introduction

During pregnancy, the mother's physiology adapts to provide nutrients to the growing fetus. However, an imbalance in the amount of triglycerides (TG), either before or during pregnancy, has been related to maternal-perinatal pathologies such as preeclampsia (PE) and gestational diabetes mellitus (GDM). In neonates, reported pathologies include preterm delivery (PD), dystocia, macrosomia, hypoglycemia and intrauterine growth restriction. (1-6)

In Colombia, 7,482 cases of severe maternal morbidity (SMM) were reported during the first semester of 2016, corresponding to 27.9 mothers for every 1,000 live births (LB), along with 182 cases of maternal mortality, that is, 49.2 cases for every 100,000 LB. (7) The main causes of SMM and maternal mortality were hypertensive disorders (62.4% and 15.5%, respectively) and hemorrhagic complications (15.5% and 18%, respectively). (7)

Exacerbated hypertriglyceridemia and insulin resistance create an oxidative environment that leads to endothelial injury, which in turn predisposes to the development of PE, the hypertensive disorder of highest incidence. (8-10) Intrauterine weight gain may be associated with increased TG passage through the placenta and TG production by the fetus, which would lead to a large for gestational age (LGA) fetus and, thus, possible hemorrhagic complications during delivery through uterine overdistension and rupture. (6,11,12) Besides the short-term complications associated with hypertriglyceridemia, long-term complications involve metabolic disorders and cardiovascular diseases (CVD) that may affect the well-being of the child during adulthood and the health of the mother in her middle and late adulthood. (13-16)

The objective of this work is to perform a comprehensive literature review of the pathophysiological causes, the effects on mother and child, the expected TG values in each trimester of pregnancy, and possible therapies to manage maternal hypertriglyceridemia in a timely manner.
Increase in circulating triglycerides during pregnancy

Pregnancy is a state of metabolic stress associated with high TG levels (17), which increase throughout this period; the highest concentrations are observed during the third trimester. (1) This increase is related to the decrease in the synthesis of fatty acids and in the activity of lipoprotein lipase (LPL), the enzyme that catalyzes the hydrolysis of TG-rich lipoproteins in adipose tissue. (1) The activity of this enzyme decreases by about 85% during a normal pregnancy. (2,17) TG levels decrease in the postpartum period, and this decrease is faster in women who lactate. (18)

The abovementioned events are related to the insulin resistance that occurs during pregnancy, which may be caused by the increase in non-esterified fatty acids and by changes in adipokine secretion and inflammatory factors. (1,2,17) Increased lipolysis has been associated with increased placental lactogen, progesterone, prolactin, cortisol and estrogen. (2,19) Adiponectin and apelin, which favor insulin sensitivity, decrease in the third trimester, while other adipokines and cytokines that reduce insulin sensitivity increase at the end of pregnancy, including resistin, retinol binding protein 4 (RBP4), leptin, visfatin, chemerin, adipocyte fatty acid binding protein (AFABP), tumor necrosis factor alpha (TNF-α) and interleukin-6 (IL-6). (2) In addition, the expression of peroxisome proliferator-activated receptor gamma (PPAR-γ) in adipose tissue decreases in the third trimester, contributing to insulin resistance. (2)

Placental passage of maternal triglycerides

Between 1% and 3% of maternal fatty acids circulate in non-esterified form and enter the syncytiotrophoblast through diffusion or receptor-mediated endocytosis. (3) Low-density lipoproteins associated with TG (LDL-TG), more abundant in the circulation, are hydrolyzed by intracellular lipases and cholesterol ester hydrolases. On the other hand, high-density lipoproteins (HDL) and very-low-density lipoproteins (VLDL) bind to surface receptors and are hydrolyzed extracellularly by endothelial lipase. (2,3,16) Transporters such as fatty acid transport proteins 1 and 4 (FATP-1 and FATP-4), fatty acid translocase (FAT/CD36) and plasma membrane fatty acid-binding protein (FABPpm) make fatty acid uptake a more efficient process. (3,16) At the intracellular level, fatty acids bind to the fatty acid binding protein (FABP), which has intrinsic acetyl-CoA ligase activity. (16) The syncytiotrophoblast then releases fatty acids into the fetal circulation, where they bind to α-fetoprotein (AFP), which takes them to the fetal liver to be metabolized (Figure 1). (2,12,16) It should be noted that lipid droplets have been observed in the placenta and that their formation is stimulated by adipophilin. (16)

White adipose tissue is observed in the fetus from week 14 or 15, while fetal lipogenesis begins between weeks 12 and 20; PPAR-γ activation plays an important role in the subsequent increase in adipose tissue size. (20) Fetal weight gain more than doubles, from 1.6 g/kg/day to 3.4 g/kg/day, from the 26th week of pregnancy, but only about 20% of the increase in fetal body fat depends on placental lipid transfer; the remaining tissue results from fetal lipogenesis. It is worth noting that only one third of circulating maternal glucose is used by the placenta through glycolytic routes, and the demand is not modified by increasing glucose levels because utilization is saturated between 90 mg/dL and 143 mg/dL. (2)

Hypertriglyceridemia and pathologies

A study carried out in Amsterdam (n=4,008)
revealed that a TG increase in the first trimester of pregnancy is directly associated with pregnancy-induced hypertension, PD and LGA. (3) Likewise, a study carried out in India (n=180) found that high TG levels (≥195 mg/dL) in the second trimester are associated with a higher incidence of PD, GDM, PE and LGA. (4) Another complication is maternal pancreatitis, which must be mentioned because of its severity (Table 1). (14,18)

In the mother

Preeclampsia

Between 2% and 8% of pregnancies are complicated by PE, which is the third cause of maternal-fetal death after hemorrhage and sepsis. (14,23-25) In the mother, this condition is associated with endothelial dysfunction, hypertension, kidney disease and diabetes mellitus (DM). It also increases the risk of developing CVD and of death due to kidney and liver impairment. (14) Hypertriglyceridemia, before or during pregnancy, alters the vascular development of the placenta, resulting in inadequate implantation or placental perfusion. (3,5,8) PE and dyslipidemia are correlated because LPL dysfunction, metabolic syndrome and increased plasma lipids occur in both. (4) These disorders stimulate peroxidation of placental lipids and trophoblast components, which promotes oxidative stress and forms deleterious complexes in endothelial cells that cause vascular dysfunction. (4,5)

A prospective cohort study conducted in Egypt (n=251) showed that TG levels between weeks 4 and 12 of pregnancy can be predictors of PE development. (5,8) This study found that increases in total cholesterol (TC), TG and LDL above 231 mg/dL, 149.5 mg/dL and 161 mg/dL, respectively, and a decrease in HDL below 42.5 mg/dL, are cutoff points with positive predictive value for the development of PE, while the TC and TG increase was related to severity. (5) Another study conducted in Turkey (n=52) found that TC, TG and LDL increases greater than 4%, 5% and 9.8%, respectively, and a decrease in HDL of more than 9%, are associated with a worse prognosis of gestational hypertensive disease. (26) The study by Manna et al.
(27) in Bangladesh (n=90) showed that increased TG levels are directly associated with an increase in blood pressure. In the study group, systolic and diastolic pressures were 152.4±19.8 mmHg and 103.1±12.2 mmHg, respectively, while in pregnant controls they were 112.0±8.9 mmHg and 75.5±6.6 mmHg, respectively. (27) These figures were associated with the TG levels in the study group (242.9±36.8 mg/dL) and in the control group (184.6±12.5 mg/dL). (27) TG >181 mg/dL before week 20 increased the risk of PE 3- to 7-fold. (28)

A study conducted in Mexico with 47 normotensive women and 27 with PE or gestational hypertension in the seventh month of pregnancy showed that hypertriglyceridemia is related to hypertensive disorders in pregnancy. The researchers found that nitric oxide (NO) synthesis decreased in proportion to the increase in TG levels. (8) The NO decrease may be secondary to increased oxidative stress, considering that high concentrations of TG or glucose inhibit NO synthesis. (29) In this study, women with hypertensive disorders of pregnancy were treated with hydralazine, which induces NO synthesis. (8) It has been observed that the increase of fatty acids in the placenta of women with DM1 is related to reduced fetal-placental circulation, and that a family history of DM2 is closely related to an increased risk of developing hypertensive disorders during pregnancy. (2)

Gestational diabetes mellitus

The prevalence of GDM has gone from 4% to 20% in 27 years and its current incidence is 1-14%. (30,31) This pathology not only increases the risk of PE and macrosomia during pregnancy, but also predisposes the mother to the development of DM2 and CVD. (12) Alterations have been found in the fetus and placenta related to imbalances in the expression and function of PPAR isotypes that increase lipid flow in women with hyperlipidemia and GDM. (2,9,10) Concentrations of PPAR-γ and PPAR-α are low in term placental tissue of women who developed GDM, whereas no changes are detected in PPAR-δ concentrations. (9,10) No changes were found in PPAR-γ concentrations in women with DM1; however, decreased levels of 15-deoxy-Δ12,14-prostaglandin J2 (15dPGJ2) were observed. (9) Also, alterations in the expression of PPAR-α during the first trimester of pregnancy are associated with spontaneous abortions. (10) 15dPGJ2 increases lipid concentrations and significantly reduces NO expression in the placenta of healthy women, while it upregulates the concentrations of phospholipids and cholesterol esters in the placenta of diabetic women. (9) In addition, its receptor (PPAR-γ) is decreased in these patients, which could increase NO and lipid peroxidation, markers of pro-inflammatory and pro-oxidant states. (9) A case-control study (n=254) conducted in the USA by Han et al.
(32) found that measurement of the pregestational lipid profile is a predictor of GDM. This study revealed that women who developed GDM, in comparison with controls, presented smaller LDL diameters, low concentrations of HDL and high levels of small VLDL, regardless of other risk factors (body mass index [BMI], weight gain during pregnancy, age or ethnicity), in measurements taken even up to 7 years before pregnancy. (32,33) On the other hand, the risk of developing GDM is 3.5 times higher if TG >137 mg/dL in the first trimester. In addition, every 20 mg/dL increase in TG raises the risk of developing GDM by 10%. (34)
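Taken at face value, these two associations describe a simple multiplicative risk gradient. The sketch below encodes them for illustration only; combining the threshold effect with the per-20 mg/dL gradient in a single formula, and extrapolating it across the whole TG range, are assumptions, and the function is not a validated clinical tool.

```python
# Illustrative only -- not a validated clinical tool. Encodes the two
# associations reported above: GDM risk 3.5x higher when first-trimester
# TG >137 mg/dL, plus +10% risk per additional 20 mg/dL (ref. 34).
# Combining them multiplicatively is an assumption.

def gdm_relative_risk(tg_mg_dl, threshold=137.0):
    if tg_mg_dl <= threshold:
        return 1.0
    steps = (tg_mg_dl - threshold) / 20.0   # 20 mg/dL increments above cutoff
    return 3.5 * 1.10 ** steps

for tg in (120, 150, 200, 250):
    print(f"TG {tg} mg/dL -> relative risk ~{gdm_relative_risk(tg):.1f}")
```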
Pancreatitis

Acute pancreatitis caused by gestational hypertriglyceridemia is a rare complication of pregnancy but, when it occurs, it entails high maternal and fetal morbidity and mortality (1,15,16,18) and may be complicated by pancreatic necrosis, shock, hypokalemia, PE or eclampsia. (15) The risk of developing pancreatitis increases progressively with TG >500 mg/dL. (15) Thus, during the first trimester, such levels are associated with a 19% risk of pancreatitis; in the second, with 26%; in the third, with 53%; and in the postpartum period, with 2% (1,15), depending on the activity of pancreatic lipase and on liver involvement. (15,18)

In the fetus

Macrosomia

Lipid availability and accumulation in the fetus of a mother with dyslipidemia increase the risk of developing macrosomia and PD. (6) A LB presents macrosomia if its weight is >4 kg, and is LGA if its size is above the 90th percentile. (12,14) Giving birth to an LGA fetus increases the likelihood of prolonged labor, cesarean section or postpartum hemorrhage. In addition, giving birth to a fetus with macrosomia makes women 4.2 times more susceptible to developing DM2 during their lives. (35) Hypoglycemia, hyperbilirubinemia, respiratory distress, cardiac hypertrophy, shoulder dystocia, clavicle fracture and brachial plexus injuries may occur in the newborn. (36,37)

A Cuban case-control study (n=236) showed that the increase in TG levels during the third trimester was a predictor of the development of fetal macrosomia (OR: 4.80; 95%CI: 2.34-9.84). (6) Another study, conducted in Chile in patients with well-controlled GDM and hypertriglyceridemia, found that macrosomia was more frequent in women with pre-pregnancy overweight or obesity. (12,38,39) A BMI >26.1 kg/m² was a predictor of macrosomia regardless of hypertriglyceridemia during the first trimester of pregnancy (40), perhaps because free fatty acids act as growth factors and, in high concentrations, compete with sex hormones for their binding site on albumin, which increases the levels of free hormones that could act on the placenta and the fetus, thus modifying its growth and development. (3) It is noteworthy that the levels of circulating leptin in pregnant women with GDM, PE, intrauterine growth restriction or macrosomia have not been shown to have predictive value for the weight of the newborn. (41)

Contrary to the case with increased TG, recent studies suggest that low levels of omega-3 fatty acids during prenatal life influence adiposity in children, not only in intrauterine life but also in the postnatal period. (42,43) Low concentrations of omega-3 fatty acids are associated with lower weights. In turn, a study conducted in India identified that the placentas of term infants with low weight had lower levels of docosahexaenoic acid (DHA) compared with term neonates weighing >2,500 g. (43)

Preterm delivery

PD is the leading cause of neonatal morbidity and mortality and occurs in 12% of births. (14,44) Its risk increases to 60% if there is a history of DM1, DM2 or pregestational hyperlipidemia, and to 33.3% if there is hypertriglyceridemia in the third trimester. (44-46)

Variants in LPL

Polymorphisms in the LPL gene (S447X, N291S and D9N) are associated with exaggerated increases in TG levels during pregnancy. (17,18) In addition, two APOAV polymorphisms, -1131T>C and S19W, are related to increased VLDL secretion and to increases in circulating TG levels of 11% and 16.2%, respectively. (18) An interesting correlation has been found between APOAV (-1131T>C), maternal size and the crown-rump length of the fetus. (18)

Measurement of triglycerides during pregnancy

In practice, measuring the concentrations of circulating TG once every trimester is advisable, since increased TG levels are normal during pregnancy (Table 2). Clinically, xanthomas on the external surface of the arms, legs and buttocks, retinal lipemia, hepatosplenomegaly and lipemic serum are very suggestive of severe hypertriglyceridemia. (15)

A study conducted by Landázuri et al. (48) in Colombia (n=422) found that TG levels were 86% higher during the second trimester of pregnancy and 137.8% higher in the third, compared to the first. The authors followed 56 of these pregnant women throughout their pregnancy and observed a TG increase of 58.8% from the first to the second trimester and of 112.6% from the first to the third trimester (p<0.001). (48) In this study, low HDL levels were observed during pregnancy compared to European or North American populations. (46,47) Low HDL serum levels could be an additional risk factor for cardiovascular or gestational diseases in Colombian mothers and their children. (51)

Studies evaluating the lipid profile of normal pregnant women have allowed physiological levels to be proposed for different populations. Thus, the study conducted by Ywaskewycz et al. (49) (n=291) in pregnant women without complications showed a TG increase of 56% from the first to the second trimester and of 124% from the first to the third trimester; in the third trimester, levels were twofold those of non-pregnant controls, and no differences were observed between TG in the first trimester and TG in non-pregnant women. (49) In addition, the TG/cHDL ratio increases during pregnancy, indicating the presence of LDL particles that are smaller, richer in TG, denser and of higher atherogenic risk. (49)
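As simple arithmetic, the longitudinal percentages reported by Landázuri et al. translate a first-trimester value into expected later-trimester values, as the sketch below shows. These are population-level averages from a single Colombian cohort, used here for illustration only, not individual reference limits.

```python
# Projecting expected TG by trimester from a first-trimester value, using
# the longitudinal increases reported by Landazuri et al. (ref. 48):
# +58.8% to the second trimester, +112.6% to the third. Population-level
# averages for illustration, not individual reference limits.

def expected_tg(tg_t1, inc_t2=0.588, inc_t3=1.126):
    return tg_t1, tg_t1 * (1 + inc_t2), tg_t1 * (1 + inc_t3)

t1, t2, t3 = expected_tg(100.0)   # e.g. 100 mg/dL in the first trimester
print(f"T1 {t1:.0f} -> T2 ~{t2:.0f} -> T3 ~{t3:.0f} mg/dL")
```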
On the other hand, the study by Becerra et al. (50) (n=91) with healthy pregnant women found that TG levels increased significantly (p<0.001) from the first to the second and third trimesters, and that the TG/cHDL ratio was correlated with pre-pregnancy BMI, fetal abdominal circumference, estimated fetal weight and uterine height (p<0.01).

Treatment of hypertriglyceridemia during pregnancy

The treatment of hypertriglyceridemia depends on the degree of lipid elevation. If it is moderate (200-999 mg/dL), a strict low-fat diet with nutritional support based on medium-chain TG and ω-3 fatty acids is initiated while physical activity is increased. Dietary supplementation with DHA and eicosapentaenoic acid reduces the production of pro-inflammatory cytokines (TNF-α, IL-1, IL-6 and IL-8) and inhibits the synthesis of VLDL without altering that of HDL. (15,16,52)

If hypertriglyceridemia is severe (>1,000 mg/dL), medications such as fibrates (PPAR-α agonists), statins, niacin, heparin or insulin are considered. Other possible alternatives are carbaprostacyclin and iloprost, drugs that activate PPAR-δ, and glitazones, which are synthetic ligands of PPAR-γ. (15,53) However, to the extent possible, medications should be avoided during the first trimester of pregnancy, since many are contraindicated because of the potential harm to the fetus. It should be noted that the use of statins and fibrates has been described in case reports.

If dyslipidemia is refractory to drugs or nutrition, plasmapheresis is initiated. (54) This procedure is also indicated when serum lipase levels are >3 times the upper normal limit of this enzyme, or when hypocalcemia, lactic acidosis or worsening inflammation or organ dysfunction occur; it can also be combined with heparin infusion. (55,56) The procedure should be stopped once TG levels fall below 500 mg/dL (55,56); however, if it is contraindicated or not available, an infusion of regular insulin with 5% dextrose is used, and glucose levels must be maintained between 150 mg/dL and 200 mg/dL during therapy. (15,53) When TG levels have normalized, the next step is rigorous control of the lipid profile, long-term dietary restriction and administration of fenofibrate. (15,16) In severe hypertriglyceridemia, PD is induced, considering the high risks of maternal and fetal mortality (20% and 50%, respectively) and of secondary pancreatitis. (1,15,16)
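For readers who find the tiered protocol easier to follow as explicit logic, the sketch below schematically encodes the thresholds summarized above. It is a reading aid only, not clinical guidance, and it deliberately omits the many contextual factors (trimester, comorbidity, drug contraindications) that drive real decisions.

```python
# Schematic of the management tiers described above. A reading aid only,
# not clinical guidance; real decisions depend on trimester, comorbidity
# and contraindications that this sketch deliberately omits.

def management_tier(tg_mg_dl, refractory=False, lipase_x_uln=1.0):
    if refractory or lipase_x_uln > 3:
        return "plasmapheresis (stop once TG < 500 mg/dL)"
    if tg_mg_dl >= 1000:
        return "severe: consider pharmacotherapy (avoid in first trimester)"
    if tg_mg_dl >= 200:
        return "moderate: strict low-fat diet, MCT and omega-3 support, exercise"
    return "within expected range: monitor TG once per trimester"

print(management_tier(450))
print(management_tier(1500))
print(management_tier(1500, refractory=True))
```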
A study conducted by Kern-Pessoa et al. (57) in São Paulo (n=73) found that TC and TG levels increased during pregnancy and decreased from the third to the sixth postpartum week in women with GDM. (57) In this study, LDL and TG levels increased during pregnancy in patients who received insulin, and decreased in those treated with a strict low-fat diet rich in ω-3 fatty acids as therapy. (57)

Conclusions

During pregnancy, TG concentrations increase as a physiological adaptation mechanism, but if they reach very high levels they become a risk factor for the mother and the child in the short and long term. The metabolic stress observed during this stage is associated with a decrease in the synthesis of fatty acids and in the activity of lipoprotein lipase, which elevates non-esterified fatty acids and increases insulin resistance. Greater lipolysis is associated with increases in hormones (progesterone, prolactin and estrogens), cytokines (TNF-α and IL-6) and adipokines (leptin, visfatin and resistin), and with decreases in adiponectin and apelin, which together reduce insulin sensitivity. For this reason, the availability of TG bound to lipoproteins such as VLDL increases for the transplacental passage of fatty acids to the fetus through diffusion, hydrolysis and membrane transporters. These fatty acids reach the fetal liver bound to α-fetoprotein.

Hypertriglyceridemia increases the risk of pregnancy complications, especially in pregnant women with a history of obesity or overweight, uncontrolled diabetes and familial dyslipidemia. In pregnant women, GDM and acute pancreatitis are the main complications of hypertriglyceridemia; PD, macrosomia and LGA fetuses are the main complications for the product of pregnancy.

Regarding treatment, some case reports have described the use of statins and fibrates in pregnant women with severe hypertriglyceridemia; however, the use of these medications could generate more risks than benefits for the fetus, so their prescription is not recommended during pregnancy.

The limited number of studies and the great variability of the data indicate the need to conduct more research in Colombia to establish the normal ranges of TG during the three trimesters of pregnancy. This could facilitate the diagnosis and monitoring of hypertriglyceridemia throughout pregnancy. On the other hand, it is important that health professionals understand the importance of measuring TG levels before pregnancy to determine risks, implement effective interventions and reduce maternal and child morbidity and mortality.

Table 1. Outcomes of gestational hypertriglyceridemia according to different studies (LGA: large for gestational age fetus; table not reproduced in this extraction).
Table 2. Normal triglyceride levels during pregnancy (table not reproduced in this extraction).
The effect of transcutaneous electrical stimulation of the submental area on the cardiorespiratory response in normal and awake subjects

Background: Electrical stimulation has recently been introduced to treat patients with obstructive sleep apnoea (OSA). There are, however, few data on the effects of transcutaneous submental electrical stimulation (TES) on the cardiovascular system. We studied the effect of TES on cardiorespiratory variables in healthy volunteers during head-down-tilt (HDT)-induced baroreceptor loading.

Method: Cardiorespiratory parameters (blood pressure, heart rate, respiratory rate, tidal volume, airflow/minute ventilation, oxygen saturation, and end-tidal CO2/O2 concentration) were recorded seated, supine, and during head-down tilt (50°) under normoxic, hypercapnic (FiCO2 5%) and poikilocapnic hypoxic (FiO2 12%) conditions. Blood pressure (BP) was measured non-invasively and continuously (Finapres). Gas conditions were applied in random order. All participants were studied twice on different days, once without and once with TES.

Results: We studied 13 healthy subjects (age 29 (12) years, six female, body mass index (BMI) 23.23 (1.6) kg·m−2). A three-way ANOVA indicated that BP decreased significantly with TES (systolic: p = 4.93E-06, diastolic: p = 3.48E-09, mean: p = 3.88E-08). Change in gas condition (systolic: p = 0.0402, diastolic: p = 0.0033, mean: p = 0.0034) and different postures (systolic: p = 8.49E-08, diastolic: p = 6.91E-04, mean: p = 5.47E-05) similarly impacted on BP control. When tested for interaction, there were no significant interactions between the three factors electrical stimulation, gas condition and posture, except for an effect on minute ventilation (gas condition/posture p = 0.0369).

Conclusion: Transcutaneous electrical stimulation has a substantial impact on blood pressure. Similarly, postural changes and variations in inspired gas impact on blood pressure control. Finally, there was an interaction between posture and inspired gases that affects minute ventilation. These observations have implications for our understanding of integrated cardiorespiratory control, and may prove beneficial for patients with SDB who are assessed for treatment with electrical stimulation.

Introduction

Obstructive sleep apnoea (OSA) is a highly prevalent condition that affects about one billion people worldwide (Benjafield et al., 2019). In patients with OSA, intermittent and repeated upper airway collapse during sleep results in irregular breathing at night. Nocturnal apnoeas and hypopnoeas lead to an altered drive to breathe, high work of breathing, oxygen desaturations, and arousals from sleep (Hilton et al., 2001). These effects can cause daytime symptoms, such as sleepiness, and are associated with increased sympathetic tone and elevated blood pressure (Remmers et al., 1978). OSA is associated with comorbidities, including hypertension (Marin et al., 2005; Parati et al., 2014), ischaemic heart disease (Martinez et al., 2012), stroke (Palomäki et al., 1989), congestive heart failure (Bradley et al., 1985), obesity and metabolic syndrome (Levy et al., 2009), and diabetes (Punjabi et al., 2002). Treatment of OSA includes continuous positive airway pressure (CPAP) and mandibular advancement devices (MAD) (National Institute for Care Excellence, 2021).
Primary airway therapies aim to maintain upper airway patency during sleep and lead to a normalisation of the work of breathing and the prevention of apnoeas, hypopnoeas, and arousals from sleep that could cause the sympathetic response. Long-term therapy of OSA improves daytime symptoms and, potentially, long-term cardiovascular risks (Somers et al., 2008). CPAP therapy remains the most common treatment for moderate-to-severe OSA, while for milder cases of OSA, MADs can also be effective (NICE (National Institute for Health and Care Excellence), 2008). However, long-term adherence to CPAP therapy is limited, with only 70% adherence at 3 months (Benjafield et al., 2019b) and further reductions at later follow-up (Benjafield et al., 2021). Non-CPAP therapies provide alternatives for patients who have difficulties with long-term compliance with CPAP (Randerath et al., 2021) and may be preferred over conventional treatment (Campbell et al., 2015).

Recently, electrical stimulation applied invasively using hypoglossal nerve stimulation (HNS) (Strollo et al., 2014) or transcutaneously (TESLA) in the submental area to target the upper airway dilator muscles, particularly the genioglossus muscle, has been developed to treat OSA (Pengo et al., 2016). The randomised controlled trial using HNS (STAR trial) reported modest improvements in diastolic blood pressure with no significant changes in systolic blood pressure or heart rate over a 1-year period (Strollo et al., 2014). However, data on the acute cardiorespiratory responses to transcutaneous electrical stimulation of the upper airway dilator muscles, in both health and disease, remain sparse. This study considers the physiological response to electrical stimulation applied in direct proximity to the carotid arteries: as hypoglossal nerve stimulation and transcutaneous electrical stimulation are nowadays being used to treat obstructive sleep apnoea, the purpose of this study was to test the effects the current has on the cardiorespiratory system in a cohort of normal subjects (Strollo et al., 2014). We hypothesize that acute application of transcutaneous electrical stimulation of the submental area will influence cardiovascular control in healthy, awake subjects.

In the current study, we sought to describe the effect of transcutaneous electrical stimulation of the submental area on cardiorespiratory control. For this purpose, we recorded beat-by-beat blood pressure together with other cardiopulmonary variables during exposure to room air and to hypoxic and hypercapnic gas mixtures (chemosensitivity), in seated and supine postures as well as at 50° HDT (baroreceptor response), with and without electrical stimulation of the submental area (TES).

Methods and subjects

The study was approved by the local research ethics committee (King's College London; RESCM-20/21-8487) and performed in accordance with the Declaration of Helsinki. All participants received an information sheet and provided informed and written consent prior to participation.

Subjects

We included healthy, normal- and slightly overweight subjects of both sexes over 16 years of age. All participants were non-smokers, free of cardiorespiratory and other significant acute or chronic illness, and had normal blood pressure. Participants visited the respiratory physiology laboratory on two occasions at least 1 week apart, with one visit acting as a control without electrical stimulation and the other with TES used in all postures and gas conditions.
Inclusion criteria

Subjects for the study met all of the following criteria: age >16 years, body mass index (BMI) >18.5 and <30 kg/m², non-smoker, and clinically stable in the last 28 days.

Exclusion criteria

Subjects were excluded from the study if any of the following conditions were met: history of cardiovascular, respiratory, or neuromuscular disease, cardiac pacemaker, active seizures, current smoking, acute illness, allergy to skin patches, obesity (BMI >30 kg/m²) or cachexia (BMI <18.5 kg/m²), and vertigo.

Primary and secondary outcomes

The primary outcome of the study was the change in diastolic blood pressure (BP) with electrical stimulation, affecting the baro- and chemoreceptor response. Secondary outcomes were changes in other cardiovascular (systolic/mean BP, heart rate) and respiratory variables (respiratory rate, tidal volume, minute ventilation, modified Borg scale) during electrical stimulation.

Equipment

During the second visit, following the baseline visit without electrical stimulation, participants were continuously stimulated using electrical current in the submental area (4 × 4 cm dermal patches; Med-Fit Plus Ltd., Stockport, United Kingdom) at a frequency of 30 Hz and a pulse width of 250 microseconds. The intensity of the electrical current was titrated according to individual comfort using a TENS machine (Premier Combo Plus, the TENS + Company, Stockport, United Kingdom) placed in the submental area midway between the angle of the mandible and the chin (Figure 1), as described previously (Steier et al., 2011). Beat-by-beat arterial blood pressure was measured continuously using digital artery photoplethysmography (Finapres, Ohmeda 2300, BOC Healthcare, Englewood, CO, United States of America). Heart rate was measured from the electrocardiogram (ECG) with electrodes positioned in the lead II configuration (ML132 bioamplifier, ADInstruments, Oxford, United Kingdom). Respiratory flow was measured via a mouthpiece, with the subject wearing a noseclip, using a pneumotachograph (4800 series, Hans Rudolph Inc., Shawnee, Kansas, United States of America) and an associated differential pressure transducer (Spirometer, ADInstruments, Oxford, United Kingdom). The distal end of the pneumotachograph was attached to a two-way non-rebreathing valve (2700 series, Hans Rudolph Inc., Shawnee, Kansas, United States of America; deadspace 77 ml), with inspired and expired gases measured continuously using a gas analyser (ML206, ADInstruments, Oxford, United Kingdom) connected to a side port on the pneumotachograph via a fine-bore catheter. Blood oxygen saturation (SpO2) was measured using a pulse oximeter (Sat 805 pulse oximeter, Charter Kontron, United Kingdom) attached to the subject's finger. All data were acquired (PowerLab 16, ADInstruments, Oxford, United Kingdom) at 1 kHz sampling and displayed (LabChart ver 8, ADInstruments, Oxford, United Kingdom). Tidal volume was obtained by digital integration of flow by the acquisition software. An open circuit (Figure 2) was used to deliver a continuous supply of medical air (wall outlet) to the inspiratory port of the two-way non-rebreathing valve via a low-volume (2.5 L) reservoir bag. The inspired gas could be enriched with 100% nitrogen or 100% carbon dioxide from cylinders (BOC, Guildford, United Kingdom) to provide the appropriate inspired gas concentration.
Three inspired gas mixtures were used: medical air (21% O2, balance N2), poikilocapnic hypoxia (12% O2, balance N2) and normoxic hypercapnia (5% CO2, balance N2). Symptoms of breathlessness were scored using the modified Borg scale in each posture (seated, supine, and 50° HDT). An electrically operated tilt table (Plinth2000 Ltd., Stowmarket, United Kingdom), adjustable from 0° (flat) to 50° HDT, was used to change posture.

Short protocol

The following parameters were recorded at baseline: demographic data (date of birth, height, weight, body mass index, ethnicity, and gender), clinical history, and medications. The neck, hips, and waist were measured along with vital signs (heart rate and blood pressure). Measurements were first recorded in the seated position with the subject exposed to 5 min of each gas mixture, randomly assigned, before moving to the tilt table, where measurements commenced in the supine position. Subjects were secured to the tilt table using a foam mattress and a foot strap across the ankles. Participants were familiarised with the 50° HDT procedure before the experiment commenced. After an initial period of stabilisation (at least 5 min) in the supine position, a period of 5 min resting breathing was recorded. The table was then tilted to 50° HDT for 10 min. At the end of the 50° HDT, the subject was returned to the supine position for a further 5 min. Spontaneous ventilation, end-tidal gases (EtO2, EtCO2) and oxygen saturations were recorded throughout. The tilt table procedure was repeated three times, with the subject breathing the gas mixtures in random order (Figure 3). Participants were blinded to the identity of the gas being administered. For safety, the stop criterion for the hypoxic gas mixture was met if the arterial oxygen saturation (SpO2) dropped below 80%. To account for equilibration after a change in posture or gas mixture, the final 2 min of recording for each posture and each gas mixture were analysed and an average reported for each variable.

Figure 1. Electrode placement in the submental area, midway between the angle of the mandible and the chin, as described previously (Steier et al., 2011).

Data processing

All data were recorded in real time using LabChart software (Chart V8, ADInstruments, Dunedin, New Zealand) with analog-to-digital conversion at a sampling rate of 1 kHz. Data were exported and assigned key time periods for further analysis. Each variable was averaged over the last 2 min in the seated, supine, and HDT positions. Respiratory variables (tidal volume (Vt) and respiratory rate (RR)) were extracted and multiplied to calculate minute ventilation (VE). Pulse blood pressure (pBP) was computed from systolic (SBP) and diastolic (DBP) pressures as pBP = SBP - DBP, and mean arterial pressure (MAP) was calculated as MAP = 1/3 SBP + 2/3 DBP. Heart rate was derived from the 3-lead ECG, and SpO2 from the pulse oximeter.

Sample size calculation

Based on the sample size of 13 subjects, the study could detect a treatment difference at a two-sided significance level of 0.025 if the true mean difference in diastolic blood pressure (electrical stimulation on vs. off) was at least 8.049 mmHg (SD 10.6) with 80% power. The variable calculated was the minimal detectable difference in mean diastolic blood pressure, based on previous data (Strollo et al., 2014).

Statistical analysis

Following testing for normality, data were presented as mean (SD) unless otherwise indicated. Data were analysed using a three-way analysis of variance (ANOVA) followed by Tukey's test, using the 'anovan' and 'multcompare' functions of MATLAB (Version 2022B, MathWorks Ltd., Natick, MA, United States of America), to evaluate the overall effects of three factors: a) TES (on/off), b) posture (seated, supine, HDT), and c) inspired gas (RA, hypercapnia, and hypoxia); furthermore, baroreflex and chemoreflex interaction was tested through the combinations of these three factors. The level of significance was defined as p < 0.05. Two illustrative sketches of these calculations follow below.
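The sample-size reasoning can be reproduced approximately with a normal-approximation formula for a paired comparison. The sketch below is in Python for illustration; its result will differ slightly from the reported 8.049 mmHg because the exact figure depends on the test statistic and software the authors used.

```python
from scipy.stats import norm

# Normal-approximation sketch of the minimal detectable difference (MDD)
# for a paired comparison (stimulation on vs. off). The paper reports an
# MDD of 8.049 mmHg (SD 10.6, n = 13, 80% power, two-sided alpha 0.025);
# this approximation gives ~9.1 mmHg, close but not identical.

def minimal_detectable_difference(n, sd, alpha=0.025, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided critical value
    z_beta = norm.ppf(power)
    return (z_alpha + z_beta) * sd / n ** 0.5

print(f"MDD ~ {minimal_detectable_difference(13, 10.6):.2f} mmHg")
```

Likewise, a hedged Python equivalent of the MATLAB 'anovan' analysis, run here on synthetic data with the factor structure of the study (13 subjects × 2 TES states × 3 postures × 3 gases), shows how the derived variables and the three-way factorial model fit together; all effect sizes planted in the fake data are arbitrary.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)

# Synthetic stand-in for the trial data; illustrative only. The authors
# used MATLAB's anovan/multcompare rather than this Python equivalent.
idx = pd.MultiIndex.from_product(
    [range(13), ["off", "on"], ["seated", "supine", "hdt"],
     ["air", "hypercapnia", "hypoxia"]],
    names=["subject", "tes", "posture", "gas"])
df = pd.DataFrame(index=idx).reset_index()
df["sbp"] = rng.normal(120, 10, len(df)) - 5 * (df["tes"] == "on")
df["dbp"] = rng.normal(75, 8, len(df)) - 4 * (df["tes"] == "on")

# Derived variables as defined in the Data processing section.
df["pbp"] = df["sbp"] - df["dbp"]              # pulse pressure
df["map"] = df["sbp"] / 3 + 2 * df["dbp"] / 3  # mean arterial pressure

# Three-way factorial ANOVA with all interactions (cf. 'anovan').
model = smf.ols("dbp ~ C(tes) * C(posture) * C(gas)", data=df).fit()
print(anova_lm(model, typ=2))
```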
Results

We studied 13 healthy subjects (age 29 (12) years, six female, BMI 23 (1.6) kg/m², waist:hip (W:H) ratio 0.87 (0.05)) (Supplementary Table S1). Two further volunteers were unable to attend the second visit and had incomplete datasets for the primary outcome; these were not included in the analysis. Subjects used an electrical current of 8 (2) mA, which had been titrated to a comfortable and tolerable level of skin sensation. There were no adverse events, and no participant required electrical stimulation to be stopped.

Cardiovascular variables

Systolic blood pressure

A marked reduction in systolic blood pressure during electrical stimulation was observed under hypoxic conditions in the HDT posture; there was also a trend towards a reduction in the other postures (Table 1).

Diastolic and mean blood pressure

A marked reduction in both diastolic and mean arterial blood pressures was also observed during electrical stimulation in the supine and HDT postures. There was also a tendency towards a reduction in diastolic blood pressure during electrical stimulation when seated under hypoxic conditions, and during HDT under both hypercapnic and room air conditions (Tables 2, 3).

Pulse pressure

There was no significant change in pulse pressure when applying electrical stimulation, independent of posture and gas mixture (Table 4).

Heart rate

The heart rate did not change significantly with electrical stimulation, and this observation was independent of posture or gas mixture (Table 5).

Respiratory variables

There was no change in the respiratory rate with electrical stimulation in any of the three postures studied (Table 6).

Tidal volume and minute ventilation

There was a trend towards increased tidal volume in the supine posture under hypercapnic conditions with electrical stimulation (Figure 4; details are provided in Supplementary Table S3).

End-tidal carbon dioxide (EtCO2) and oxygen saturation

No changes in EtCO2 during electrical stimulation in the seated, supine or HDT posture were observed when breathing room air or under hypoxic or hypercapnic conditions. The SpO2 did not change when comparing electrical stimulation to baseline in any posture or gas condition studied.

Modified Borg scale

There was no significant change in the breathlessness scores when applying electrical stimulation, independent of posture and gas mixture (Supplementary Table S2).

Discussion

The cardiorespiratory response to submental transcutaneous electrical stimulation, a novel therapeutic approach in OSA, applied during enhanced chemoreceptor activation (gas conditions) and baroreceptor loading (posture) demonstrates marked effects on cardiovascular control, with a modest effect on respiratory control. Electrical stimulation appears to sensitise the arterial baroreceptor response, resulting in a decrease in diastolic blood pressure of 19%-25% under hypoxic conditions (chemoreceptor) when supine and during HDT (baroreceptor).
The effect of the electrical current on systolic blood pressure was slightly less consistent, although a reduction of 16% was observed in the HDT position under hypoxic conditions. There were no significant differences in heart rate or pulse pressure with electrical current; this was independent of the posture or gas mixture used. Respiratory variables did not change significantly with electrical stimulation, except for minute ventilation.

Significance of findings

A number of pathways are involved in the cardiovascular responses to systemic hypoxia, including the primary effects of peripheral chemoreceptor stimulation, the secondary effects of ventilation, and the direct effects of hypoxia on the heart and peripheral vasculature, with subsequent effects on the autonomic and central nervous systems (Marshall, 1994). The effects of ventilation, mediated by carbon dioxide and oxygen, on the cardiovascular system remain to be fully elucidated (Heistad et al., 1974). Importantly, there is a cardiorespiratory interaction that is mediated via hypoxia and affects the baroreflex response, as suggested by our observations. It has been described previously that stimulation of the chemoreceptors can lead to an increased heart rate and a change in blood pressure (cardiovagal baroreflex) in humans (Bristow et al., 1971; Bristow et al., 1974). This is further supported by recent evidence showing that exposure to hypoxia can alter the arterial baroreflex and change heart rate and sympathetic nerve activity with a higher blood pressure (Heistad et al., 1974; Halliwill and Minson, 2002; Halliwill et al., 2003).

Electrical stimulation targets the upper airway dilator muscles, particularly the genioglossus muscle (GG), and counteracts their diminished state-dependent neuromuscular tone, which promotes upper airway collapsibility (Mezzanotte et al., 1996). Both the invasive and the transcutaneous approach to stimulating the upper airway dilator muscles are beneficial for maintaining airway patency during sleep in patients with OSA (Strollo et al., 2014; Pengo et al., 2016), improving the AHI by a mean of 9.1 (95% confidence interval, CI, 2.0-16.2) events/hour and the 4% ODI by a mean of 10.0 (95% CI 3.9-16.0) events/hour (Pengo et al., 2016). Furthermore, the initial feasibility studies used TESLA with a current of 10.1 (3.7) mA (Steier et al., 2011). In the current study, electrical stimulation was well tolerated and had no adverse effects, underlining its safety for use in the submental area and its efficacy in lowering diastolic blood pressure.

In the context of potential long-term treatments for patients with OSA, who have a high prevalence of cardiovascular comorbidities, it is important to highlight that heart rate did not change significantly. A reduction in blood pressure, systolic and diastolic, remains a favourable outcome for patients with sleep-disordered breathing, as cardiovascular risk is typically raised and treatment-resistant hypertension is of clinical relevance in this cohort (Antic et al., 2015). There are various interactions between different types of sleep apnoea and cardiovascular variables (e.g., blood pressure). On the one hand, central sleep apnoea is driven by heart failure (Javaheri and Javaheri, 2022). On the other hand, obstructive sleep apnoea leads to an increased sympathetic tone, which may impact on blood pressure, contributing to hypertension (Antic et al., 2015; Pengo et al., 2020; Pengo et al., 2021).
Limitations of the study

This prospective physiological study had a relatively small sample size, and certain interactions could become more significant with a larger sample, for example the effect of the three factors on minute ventilation. The effects of TES on the primary outcome variable, diastolic blood pressure, were observed highly consistently in all subjects, with a large effect size. Longer steady-state periods could have had advantages over the quasi-steady state achieved during the 5-min baseline periods used; this choice was pragmatic, allowing completion of a lengthy protocol and the return of the healthy volunteers for a demanding second session. We were also limited in making causal inferences owing to the observational design of the study. Despite the complex experimental setup, some parameters, such as neural respiratory drive, blood gases, and perfusion, could have provided helpful additional insights into the interaction between the cardiovascular, respiratory, peripheral autonomic and central nervous systems but were not measured on this occasion. In addition, this study focused on healthy subjects with normal blood pressure. Thus, further studies in subjects with hypertension and sleep-disordered breathing are needed to provide a comprehensive dataset on how electrical stimulation affects the chemo- and baroreceptor response in these clinically relevant cohorts. However, these points do not negate the value of a highly complex physiological experiment in human beings with a large effect size, which allows useful information to be derived for future clinical applications.

Figure 4 (caption fragment; see also Supplementary Table S4): there was a significant interaction between factors b and c on minute ventilation. Data are shown as median with 25th and 75th percentiles, behind individual data points.

Conclusion

Electrical stimulation of the submental area affects the chemo- and baroreceptor response in normal healthy volunteers, resulting in substantially lower levels of blood pressure. Similarly, inspired gas and posture impact on blood pressure regulation. Furthermore, electrical stimulation might modulate the cardiovascular risk in patients with hypertension and sleep-disordered breathing, a hypothesis that warrants further investigation in the respective clinical cohorts.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Ethics statement

The studies involving human participants were reviewed and approved by the local research ethics committee (King's College London; RESCM-20/21-8487). The patients/participants provided their written informed consent to participate in this study.
The effects of sulfated secondary bile acids on intestinal barrier function and immune response in an inflammatory in vitro human intestinal model

Dysbiosis-related perturbations in bile acid (BA) metabolism have been observed in inflammatory bowel disease (IBD) patients, characterized by increased levels of sulfated BAs at the expense of secondary BAs. However, the exact effects of sulfated BAs on the etiology of IBD have not yet been investigated. Therefore, we aimed to investigate the effects of sulfated deoxycholic acid (DCA), sulfated lithocholic acid (LCA) and their unsulfated forms on intestinal barrier function and immune response. To this end, we first established a novel in vitro human intestinal model to mimic chronic intestinal inflammation as seen during IBD. This model consisted of a co-culture of Caco-2 and HT29-MTX-E12 cells grown on a semi-wet interface with mechanical stimulation to represent the mucus layer. A pro-inflammatory environment was created by combining the co-culture with LPS-activated dendritic cells (DCs) in the basolateral compartment. The presence of activated DCs caused a decrease in transepithelial electrical resistance (TEER), which was slightly restored by LCA and sulfated DCA. The expression of genes related to intestinal epithelial integrity and the mucus layer was slightly, but not significantly, increased. These results imply that sulfated BAs have a minor effect on intestinal barrier function in Caco-2 and HT29-MTX-E12 cells. When DCs were exposed directly to the BAs, our results point towards anti-inflammatory effects of secondary BAs, but to a minor extent for sulfated secondary BAs. Future research should focus on the importance of proper transformation of BAs by bacterial enzymes and the potential involvement of BA dysmetabolism in IBD progression.

Introduction
Inflammatory bowel disease (IBD) comprises a set of disorders that cause chronic and relapsing inflammation of the gastrointestinal tract. The etiology of IBD remains largely unknown, although it is clear that it is a multifactorial disease in which the complex interplay between genetic susceptibility, environmental stimuli and the immune system is involved [1,2]. Furthermore, the gut microbiota is thought to play a major role in the onset and progression of IBD, which is emphasized by studies showing that the gut microbiota composition in IBD patients is dysbiotic [3,4,5,6]. Dysbiosis is linked to disturbed intestinal barrier function, such as increased intestinal permeability [7] and an impaired mucus layer [8,9]. Impaired intestinal barrier function enables direct bacterial contact with the epithelial cell layer, thereby inducing an inflammatory response [10,11,12,13]. In a healthy situation, the intestinal mucosal immune system is tolerant of commensal bacteria, a process in which intestinal dendritic cells (DCs) play a crucial role [14,15,16]. During IBD, intestinal DCs have lost their tolerogenic function and produce elevated levels of pro-inflammatory cytokines, consequently leading to an exacerbated disease progression [17,18]. Importantly, dysbiosis is also linked to an altered production of bacterial metabolites, such as secondary bile acids (BAs) [3,19,20,21]. Primary BAs are synthesized in the liver, conjugated with taurine or glycine and secreted into the small intestine, where they fulfil a major role in lipid digestion [22].
BAs are actively reabsorbed in the ileum, transported back to the liver and metabolized by hepatic enzymes to be reused again, a process called the enterohepatic cycle [22]. Approximately 5% of all BAs are not reabsorbed and enter the colon, where resident bacteria deconjugate and metabolize them into secondary BAs. These secondary BAs can be either excreted via feces or reabsorbed and transported back to the liver. However, secondary BAs might be hepatotoxic at high concentrations and are therefore first detoxified by the addition of a sulfonate group (SO3−) [23]. As a result of IBD-related dysbiosis, the production of bacterial enzymes, and thus BA metabolism, can be disturbed, a process known as BA dysmetabolism [24]. Indeed, the capacity of the gut microbiota to deconjugate BAs and transform primary to secondary BAs was decreased in patients with active IBD. As a consequence, an increased abundance of conjugated BAs and a decreased abundance of secondary BAs were detected in feces of IBD patients during both remission and active disease, as compared to healthy people [3,25]. Similar differences in BA composition were found in other studies investigating fecal metabolite pools in IBD patients [19,20,26,27]. Interestingly, dysbiosis in IBD patients was also associated with a reduced desulfation capacity, which was concomitant with 15% higher levels of fecal sulfated BAs [3]. Likewise, increased levels of fecal 3-sulfodeoxycholic acid and chenodeoxycholic acid sulfate were found in Crohn's disease patients [20]. The fecal abundance of sulfated BAs was also found to be elevated in patients with non-inflammatory intestinal disorders, such as diarrhea-predominant irritable bowel syndrome [25,28]. Given the important signaling functions of secondary BAs, including their role in inflammatory pathways, a change in luminal BA composition may have consequences for the progression of IBD. However, the possible involvement of sulfated BAs is only based on associative studies and the causal effects remain elusive. Therefore, the aim of this study was to investigate the effects of sulfated BAs on intestinal barrier function and immune response. Since existing models often insufficiently represent the physiology of the intestinal barrier and the inflammatory environment in the context of IBD, we first established a novel inflammatory in vitro human intestinal model. We included a co-culture of Caco-2 and HT29-MTX-E12 cells, which are both human colon carcinoma cell lines, representing an enterocyte-like and a mucus-producing cell line, respectively. To mimic the inflammatory state observed during IBD, the co-culture was grown on cell culture inserts in combination with DCs in the basolateral compartment, which were activated with LPS to obtain pro-inflammatory properties. In contrast to existing models, our model had an improved mucus layer obtained by growing the cells on a semi-wet interface with mechanical stimulation (SWMS) [29,30]. After exposure to sulfated deoxycholic acid (DCA), sulfated lithocholic acid (LCA) and their unsulfated forms for 24 h, the effects on intestinal barrier function and immune response were investigated. New insights into the role of BA dysmetabolism in IBD may contribute to the discovery of novel therapies that may add to the treatment of IBD.

Cell model
A co-culture of Caco-2 cells and HT29-MTX-E12 cells was seeded in 24-well ThinCert cell culture inserts with 0.4 μm pores (Greiner Bio-One, Alphen aan den Rijn, The Netherlands).
Caco-2 and HT29-MTX-E12 cells were seeded in a 3:1 ratio, using a seeding density of 225,000 cells/mL in a volume of 150 μL. A volume of 700 μL DMEM was added to the basolateral compartment. Two days after seeding, media volumes were changed to 25 μL and 425 μL in the apical and basolateral compartment, respectively. The cell culture plates were put on a CO2-resistant shaker (Thermo Fisher Scientific, Breda, The Netherlands) at 65 rpm. Cells were differentiated for 14 days and medium was changed every other day. Immature DCs were seeded in 24-well plates at a density of 400,000 cells per well. DCs were stimulated with 10 ng/mL LPS (L3024, Sigma-Aldrich, Darmstadt, Germany) for 24 h. Maturation of DCs was checked on the CytoFLEX Flow Cytometer (Beckman Coulter, Woerden, The Netherlands) using CD14-ECD antibody, clone RMO52 (IM2707 U, Beckman Coulter), FITC anti-human CD83, clone HB15e, and PE/Cyanine7 anti-human CD209 (DC-SIGN), clone 9E9A8, antibodies (BioLegend, Amsterdam, The Netherlands). The culture inserts with Caco-2 and HT29-MTX-E12 cells were transferred to the cell culture plate containing the LPS-activated DCs, without changing the LPS-containing DC culture medium. The co-culture was exposed to lithocholic acid 3-sulfate disodium salt (sulfo-LCA) (Santa Cruz Biotechnology, Dallas, United States), deoxycholic acid 3-O-sulfate disodium salt (sulfo-DCA) (Toronto Research Chemicals, Toronto, Canada), lithocholic acid (LCA) and deoxycholic acid (DCA) (Sigma-Aldrich, Darmstadt, Germany). LCA and sulfo-LCA were solubilized in DMEM:methanol (1:1, v/v). DCA and sulfo-DCA were solubilized in DMEM:methanol (3:1, v/v). The concentrations of BAs used were based on physiological concentrations [3]. A control without DCs and a control with LPS-activated DCs were included. Control cells were exposed to similar concentrations of methanol (0.5%). Every condition was applied in duplicate. A total of three similar plates were seeded and exposed to BAs; plate 1 was used for permeability assays, plate 2 for RNA isolation and plate 3 for protein isolation. Experiments in which DCs were directly exposed to BAs were performed similarly, except that BAs were applied directly to the DCs.

Quantification of lactate dehydrogenase release
To investigate the effects of BA exposure on cytotoxicity in Caco-2 and HT29-MTX-E12 cells and DCs, lactate dehydrogenase (LDH) levels were measured in conditioned medium collected directly after 24 h of BA exposure. To this end, an LDH cytotoxicity detection kit (Roche Applied Science; Almere, The Netherlands) was used following the manufacturer's instructions. As a control for complete cytotoxicity, cells were exposed for 15 min to a 1% Triton X-100 solution.

Trans- and paracellular epithelial permeability assays
Transepithelial electrical resistance (TEER) was measured with an EVOM2 Volt/Ohm meter using STX2 electrodes (World Precision Instruments, Sarasota, United States). To ensure the electrodes were fully submerged in medium, the media volumes were adapted to 100 μL apical and 700 μL basolateral before the first TEER measurements were performed. The TEER values after BA exposure were expressed as a percentage of the TEER value measured just before BA exposure. After 24 h of BA exposure, culture inserts were washed twice with PBS and transferred to a new 24-well plate. Lucifer Yellow CH dilithium salt (L0259, Sigma) was dissolved in phenol red-free medium (Gibco) to 1 mg/mL and 100 μL was added to the apical compartment.
In the basolateral compartment, 700 μL phenol red-free DMEM was added, after which the plate was incubated at 37 °C/5% CO2 for 3 h. Subsequently, 100 μL of the basolateral compartment was collected and fluorescence was measured at 425/515 nm (excitation/emission). An empty cell culture insert served as a control for complete paracellular permeability.

RNA isolation and qRT-PCR
The cell culture inserts of plate 2 were washed twice with ice-cold PBS and subsequently, 200 μL TRIzol reagent (ThermoFisher) was added per insert. The duplicates per condition were pooled to assure a sufficient RNA yield. RNA was isolated using phenol/chloroform extraction. The RNA concentration was measured using a Nanodrop (Nanodrop ND-1000, Nanodrop Products, Maarssen, The Netherlands). A total of 1000 ng RNA was reverse transcribed using the RevertAid First Strand cDNA Synthesis kit (ThermoFisher). Real-time quantitative PCR was carried out using the SensiMix SYBR kit (Bioline, Alphen aan den Rijn, The Netherlands) in a CFX384 machine (Bio-Rad). Primer sequences are listed in Table 1. Data were normalized against the housekeeping gene GAPDH.

Protein isolation and western immunoblotting
The cell culture inserts were washed twice with ice-cold PBS and 100 μL RIPA buffer (ThermoFisher) enriched with protease and phosphatase inhibitors (Roche Diagnostics) was added per culture insert. Duplicates were pooled to assure a sufficient protein yield. Cell lysates were incubated on ice for 20 min, followed by centrifugation for 10 min at 13,000 × g. Protein concentrations of the supernatants were measured using a bicinchoninic acid assay (ThermoFisher). For each sample, 14.8 μg protein was loaded on a 4-15% Mini-PROTEAN TGX Precast gel (Bio-Rad). Proteins were separated by SDS gel electrophoresis and transferred onto a polyvinylidene difluoride membrane (Trans-Blot Turbo Midi 0.2 μm PVDF Transfer Packs, Bio-Rad) using the Trans-Blot Turbo System (Bio-Rad). After blocking for 1 h at room temperature, the membranes were incubated overnight at 4 °C with anti-ZO1 (Abcam ab216880), anti-OCLN (Abcam ab216327) and anti-HSP90 (Cell Signaling Technology 4874). The ZO1 and OCLN antibodies were used at 1:1000 and the HSP90 antibody at 1:5000. Subsequently, membranes were incubated with an HRP-conjugated goat anti-rabbit IgG antibody (1:5000) (GenScript A00098) for 1 h at room temperature. All membrane incubations were in Tris-buffered saline with 0.1% Tween 20 (TBS-T) and 5% (w/v) skimmed dry milk. Washing between steps was done in TBS-T. Blots were visualized with Clarity ECL substrate (Bio-Rad) using the ChemiDoc MP system (Bio-Rad). Quantification was performed using ImageLab software (Bio-Rad).

Cytokine measurements
Medium collected from the basolateral compartments was used for the assessment of cytokines. Levels of IL-6, IL-12/IL-23 p40 and TNF-α were measured with human DuoSet ELISA Development kits (R&D Systems, Abingdon, UK) following the manufacturer's instructions.

Statistical analysis
Data are presented as mean ± standard deviation (SD). GraphPad Prism version 5 (San Diego, CA, USA) was used for the statistical analyses. Differences between the control and BA-exposed groups were determined with an unpaired Student's t-test, unless stated otherwise. A value of p ≤ 0.05 was considered statistically significant. A total of three biological replicates were performed.
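The GAPDH normalization and significance test described in this Methods section reduce to a short calculation. The following is a minimal sketch in Python, not code from the study: it assumes the widely used 2^-ΔΔCt convention for relative expression (the text states only that data were normalized against GAPDH), and all Ct and expression values are illustrative.

# Minimal sketch (illustrative values, not study data): GAPDH-normalized
# relative expression via the common 2^-ddCt convention, plus the unpaired
# Student's t-test used for control vs. BA-exposed comparisons.
import numpy as np
from scipy import stats

def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """2^-ddCt: expression of a gene of interest in a treated sample,
    normalized to GAPDH and expressed relative to the control condition."""
    d_ct = ct_target - ct_gapdh                  # normalize to GAPDH
    d_ct_ctrl = ct_target_ctrl - ct_gapdh_ctrl   # same for the control
    return 2.0 ** -(d_ct - d_ct_ctrl)

# Example: one biological replicate (illustrative Ct values).
fold_change = relative_expression(24.1, 17.9, 24.8, 18.0)   # ~1.5-fold

# Unpaired t-test across three biological replicates per condition,
# with p <= 0.05 considered significant, as stated in the Methods.
control = np.array([1.02, 0.95, 1.04])       # relative expression, control
ba_exposed = np.array([1.35, 1.18, 1.41])    # relative expression, BA-exposed
t_stat, p_value = stats.ttest_ind(control, ba_exposed)
print(f"fold change = {fold_change:.2f}, p = {p_value:.3f}")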
Establishment of an inflammatory in vitro human intestinal model consisting of Caco-2 and HT29-MTX-E12 cells combined with LPS-activated dendritic cells
The first important step of this study was to establish an in vitro human intestinal model with an improved physiological representation of the intestinal barrier and inflammatory environment in the context of IBD. In Figure 1A, a schematic overview of the study design is given. Caco-2 and HT29-MTX-E12 cells were seeded in a 3:1 ratio on cell culture inserts and SWMS conditions were applied. In parallel, primary monocytes were isolated from three human buffy coats and differentiated into DCs. Activation with 10 ng/mL LPS for 24 h resulted in mature DCs expressing the DC surface markers CD83 and CD209 (Supplementary file 1). Activated DCs produced higher levels of IL-6 (p = 0.0088) and IL-12p40 (p = 0.1) compared to DCs that were not activated (Figure 1B, C), although the IL-12p40 levels of one biological replicate were relatively low (Figure 1C). After 24 h of LPS exposure, the cell culture inserts with the Caco-2/HT29-MTX-E12 co-culture were positioned in the cell culture plates containing activated DCs. This resulted in a model consisting of intestinal cells in the apical compartment and LPS-activated DCs in the basolateral compartment (Figure 1D). TEER values measured at 24 and 48 h after combination with activated DCs decreased by 12 and 45 percentage points, respectively, compared to the condition without basolateral DCs (p < 0.001 and p < 0.0001) (Figure 1E). In the subsequent BA-exposure experiments, we used a pre-incubation period of 24 h. Altogether, we confirmed that the presence of activated DCs in the basolateral compartment caused a pro-inflammatory state, reflected by the elevated cytokine levels. This likely resulted in the observed increased intestinal permeability of the intestinal cells.

Intestinal permeability was slightly restored by LCA and sulfated DCA under inflammatory conditions
After the pre-incubation period, the co-cultures of Caco-2 and HT29-MTX-E12 cells were exposed to sulfated DCA, sulfated LCA and their unsulfated forms in different concentrations for another 24 h. Cytotoxicity, measured by the release of LDH into the apical medium, was not different between cells exposed to BAs and unexposed cells (data not shown). The TEER of all conditions exposed to BAs in the presence of activated DCs was significantly lower compared to the control without DCs (p < 0.0001) (Figure 2A). Exposure to sulfated DCA (200 μM) and both concentrations of LCA (10 μM and 50 μM) resulted in a slight, but significant, restoration of the TEER (Figure 2A). The same cell culture inserts were subjected to a Lucifer Yellow assay to investigate whether BA treatment had an effect on paracellular permeability. The flux of Lucifer Yellow from the apical to the basolateral compartment was significantly lower in cells cultured without DCs compared to the control with DCs (p < 0.05) (Figure 2B). None of the BAs had a significant additional effect on paracellular permeability.

Expression of genes related to intestinal epithelial integrity tended to increase after BA exposure
To further investigate the effects of sulfated secondary BAs on intestinal barrier function, we measured the expression of proteins related to intestinal epithelial integrity.
In line with the significant TEER reduction (Figure 2A), lower protein levels of Occludin (OCLN) and Zonula Occludens-1 (ZO1) were measured in cells exposed to activated DCs compared to the control cells without DCs (Figure 2C, D), but these differences were not significant. Next, we investigated whether these lower protein levels were the result of decreased mRNA levels. However, OCLN and ZO1 mRNA levels were not significantly affected by the presence of activated DCs in the basolateral compartment (Figure 2E, F). Other genes related to intestinal barrier function, E-cadherin (CDH1) and Claudin-1 (CLDN1), were also not affected (Figure 2G, H). Interestingly, protein levels of OCLN and ZO1 were not affected by BA exposure, whereas the expression of OCLN, ZO1, CDH1 and CLDN1 followed an increasing trend after exposure to most BAs, although these differences were not significant (Figure 2C-H). Together, these results indicate that the differences in intestinal barrier function measured by TEER were only partly reflected at the gene and protein level.

Differential expression of FXR-target genes by unsulfated, but not sulfated, secondary BAs
Next, we aimed to find out whether exposure to sulfated and unsulfated secondary BAs resulted in activation of FXR. While DCA and LCA are potent activators of FXR [22], it is unknown whether the sulfated forms of these BAs also activate FXR, as these BAs are not, or only poorly, absorbed by enterocytes [23]. To this end, we investigated whether exposure to DCA, LCA and their sulfated forms resulted in differential expression of a selection of FXR-target genes: ileal bile acid binding protein (IBABP, FABP6), fibroblast growth factor 19 (FGF19), basolateral organic solute transporters alpha and beta (OSTα/β, SLC51A/B), apical bile salt transporter (ASBT, SLC10A2) and sulfotransferase family 2A member 1 (SULT2A1) [31,32,33,34]. Interestingly, the addition of activated DCs potently reduced the expression of ASBT (p < 0.05) and SULT2A1 (p < 0.001) (Figure 3A, F). ASBT was not differentially expressed by any of the BAs (Figure 3A). In contrast, FABP6, FGF19 and OSTβ were significantly upregulated in cells exposed to DCA compared to the control with activated DCs (Figure 3B, C, E). Interestingly, exposure to 100 μM DCA reduced SULT2A1 expression compared to the control cells with DCs (Figure 3F). Altogether, these results indicate that DCA had pronounced effects on the expression of most FXR-target genes, while LCA did not have a significant effect. Exposure to neither sulfated DCA nor sulfated LCA resulted in differential expression of any FXR-target genes.

[Figure 3 caption: Expression of genes of interest is shown relative to the control (Caco-2 and HT29-MTX-E12 cells exposed to activated DCs in the basolateral compartment). *p < 0.05, **p < 0.01, ***p < 0.001 compared to the condition with activated DCs. Importantly, mRNA levels of ASBT and SULT2A1 were significantly decreased by the presence of basolateral activated DCs.]

No effects of sulfated secondary BAs on MUC2 and MUC5AC expression
In order to determine whether sulfated secondary BAs had an effect on the mucus layer, we investigated the expression of MUC2, which is the most dominant gel-forming mucin present in the intestine. Moreover, we also measured the expression of MUC5AC. This is another gel-forming mucin, which is normally not secreted in the intestine but is secreted by HT29-MTX-E12 cells, even after growing this cell type under SWMS conditions [29,30].
Interestingly, the presence of activated DCs decreased the expression of MUC2 and MUC5AC, although this effect was not statistically significant (Figure 4A, B). Compared to the control with activated DCs, the expression of MUC2 seemed to increase after exposure to 100 μM DCA and 10 μM LCA, which was borderline significant (p = 0.06 and p = 0.08, respectively) (Figure 4A). Sulfated BA exposure did not have any effect on mucin mRNA expression.

Subtle effect of some BAs on expression of genes encoding antimicrobial peptides
Antimicrobial peptides (AMPs) play an important role in intestinal innate immune defense and are known to be produced by enterocytes [35]. We measured the expression of genes encoding the AMPs defensin β-1 (DEFB1) and lysozyme (LYZ), but also angiogenin (ANG) and carbonic anhydrase 12 (CA12), since the latter two AMPs are regulated by the BA receptor FXR [36,37]. Exposure to BAs caused slight, but non-significant, changes compared to the control with activated DCs (Figure 4C-F).

[Figure 4 caption: Expression of mucin and antimicrobial peptides. A) Expression of Mucin 2 (MUC2) and B) Mucin 5AC (MUC5AC), and genes encoding the antimicrobial peptides (C-F) defensin β-1 (DEFB1), lysozyme (LYZ), carbonic anhydrase 12 (CA12) and angiogenin (ANG). Expression of genes of interest is shown relative to the control (Caco-2 and HT29-MTX-E12 cells exposed to activated DCs in the basolateral compartment). *p < 0.05 compared to the condition with activated DCs.]

No indirect effects of BA exposure on cytokine production by basolateral DCs
Although the presence of activated DCs in the basolateral compartment resulted in a significant increase in the permeability of the Caco-2/HT29-MTX-E12 co-culture, apical exposure to sulfated and unsulfated secondary BAs did not have a major additional effect on intestinal epithelial integrity (Figure 2A, B). We hypothesized that BAs might have migrated from the apical to the basolateral compartment via the openings between the intestinal epithelial cells caused by the increased intestinal permeability. In that case, BAs might have come in contact with the DCs present in the basolateral compartment. Therefore, we investigated whether this potential indirect contact between BAs and DCs caused an altered immune response by DCs. To this end, TNF-α and IL-12p40 levels were measured in conditioned medium from basolateral DCs after apical exposure to the different BAs. No differences in either TNF-α or IL-12p40 levels were found (Figure 5A, B).

3.8. Decreasing, but not significant, trend in TNF-α and IL-12p40 production by activated DCs after direct exposure to secondary BAs
The finding that cytokine production by DCs was not affected by indirect BA exposure could indicate either that BAs had not migrated towards the basolateral compartment, or that DCs were not affected by BA exposure in terms of TNF-α and IL-12p40 production. To investigate whether direct exposure to BAs affected the immune response of DCs, we exposed activated DCs directly to sulfated and unsulfated secondary BAs under conditions similar to the previous experiments with indirect exposure. DCA caused a decrease in both TNF-α and IL-12p40 levels compared to the control cells (Figure 5C, D), but these differences were not significant. Lower IL-12p40 levels were found after LCA exposure, although the variation between biological replicates was high (Figure 5D, Supplementary file 1). No significant differences were found after exposure to sulfated BAs.
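For concreteness, the two barrier readouts reported above reduce to simple normalizations against within-plate controls. Below is a minimal sketch in Python, not code from the study: the TEER percentage follows the Methods (each post-exposure value divided by its pre-exposure value), and the Lucifer Yellow passage is expressed relative to the empty-insert control, which the Methods describe as representing complete paracellular permeability; all numeric readings are illustrative.

# Minimal sketch (illustrative values, not study data): normalizations for
# the two barrier readouts described in the Methods.

def teer_percent(teer_after, teer_before):
    """TEER after BA exposure as a percentage of the value measured
    just before exposure, as described in the Methods."""
    return 100.0 * teer_after / teer_before

def ly_percent_of_empty(fluo_sample, fluo_blank, fluo_empty_insert):
    """Lucifer Yellow passage relative to an empty insert, which serves
    as the control for complete paracellular permeability."""
    return 100.0 * (fluo_sample - fluo_blank) / (fluo_empty_insert - fluo_blank)

# Illustrative readings: TEER in ohms, fluorescence in arbitrary units.
print(teer_percent(teer_after=310.0, teer_before=365.0))            # ~84.9%
print(ly_percent_of_empty(fluo_sample=1800.0, fluo_blank=120.0,
                          fluo_empty_insert=52000.0))               # ~3.2%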
Discussion
The rising prevalence of IBD in many countries is alarming, given the concomitant increase in the social and economic burden associated with this disease [38]. To decrease this burden, it is of utmost importance to better understand the underlying causes of IBD, especially because the etiology of IBD is still largely unknown. Emerging evidence suggests a potential role for BA dysmetabolism in IBD; however, the exact effects of elevated levels of IBD-associated BA subtypes have not been widely investigated yet. In the present study, we aimed to investigate the effects of sulfated secondary BAs on intestinal barrier function in the context of IBD. Furthermore, we also investigated whether sulfated BAs had an effect on the immune response in human monocyte-derived DCs. We first aimed to establish an inflammatory in vitro human intestinal model, as existing models often insufficiently reflect the chronic inflammatory state in the context of IBD. For example, many existing models either add a cytokine cocktail to induce a pro-inflammatory state [39,40] or use THP-1 cells as a representation of immune cells [41,42,43,44,45,46]. The effectiveness of this cell line in an intestinal model is questionable. In two studies, Caco-2 cells exposed to THP-1 cells were severely damaged after 48 h, which was reflected by the high cytotoxicity values and a TEER decrease of more than 80% [44,46]. Given the crucial role of intestinal DCs in IBD pathophysiology [14,47], we used human monocyte-derived DCs in our model. After activation with LPS, these DCs produced high cytokine levels, resulting in an increased epithelial permeability without affecting cytotoxicity. To further improve the physiological representativeness of our model, we also paid special attention to the mucus layer, since it is often underrepresented or even lacking in most existing intestinal in vitro models. Therefore, we cultured the Caco-2/HT29-MTX-E12 co-culture under SWMS conditions, which was shown to improve the quantity and quality of the mucus layer [29,30]. Importantly, the use of in vitro models has some limitations, e.g., with regard to translatability to the in vivo situation. However, we deemed our model suitable at this more explorative phase of our study. After successful optimization, we exposed the inflammatory in vitro human intestinal immune model to sulfated and unsulfated secondary BAs for 24 h and investigated the effects on intestinal barrier function. We found a slight TEER restoration after exposure to LCA and sulfated DCA, but not DCA and sulfated LCA. These effects on intestinal epithelial barrier integrity were partly reflected at the protein level. Previous in vitro studies also showed TEER restoration by LCA in the presence of inflammatory conditions [40,48]. With regard to DCA, we did not find an effect on TEER, while a markedly increased permeability caused by DCA was observed in several in vitro models [49,50,51,52] as well as in mice [51,53,54]. Importantly, we confirmed successful administration of DCA by measuring the differential expression of FXR-target genes. Differences in incubation duration and BA concentrations might have impeded a direct comparison between the existing literature and the results of the current study. In line with the minor effects on intestinal epithelial barrier integrity, we did not find an effect of sulfated BAs on MUC2 and MUC5AC expression. On the contrary, DCA and LCA exposure resulted in an increased expression of MUC2, which was borderline significant.
As MUC2 plays a crucial role in intestinal barrier protection [55,56,57,58], the increased MUC2 mRNA expression might indicate that these BAs have a restorative effect on the mucus layer. In several human colon cancer cell lines, DCA also caused increased MUC2 expression [59,60], but no effects of LCA on mucin mRNA expression have been described. Importantly, it was previously shown that prolonged exposure to pro-inflammatory cytokines strongly decreased mucin gene expression [61,62]. These results are in line with the decreasing trend in MUC2 and MUC5AC expression that we found after exposure to activated DCs, although this effect was not significant. Next to the effects of BAs on the mucus barrier, it is also important to consider other intestinal barrier properties, such as the AMPs that are excreted into the mucus layer [63]. Although DCA was previously shown to increase the expression and secretion of DEFB1/DEFB1 in vitro [64], we were not able to reproduce these results. We did find a slightly reduced expression of ANG after exposure to some BAs, which might imply that these BAs have a negative effect on mucosal defense [65]. However, the effects of BAs on AMPs are underexplored in the current literature, indicating that more research is needed in this field. Secondary BAs could have anti-inflammatory effects during intestinal inflammation [27,66,67]. Since intestinal DCs are able to sample luminal content [47,68], we hypothesized that luminal BAs could come in contact with DCs, which might result in an altered immune response. Indeed, direct exposure to secondary BAs caused a decreasing trend in cytokine production, but this effect was less visible after exposure to sulfated secondary BAs. This finding might suggest that increased levels of sulfated BAs at the expense of secondary BAs could abolish the anti-inflammatory effects of secondary BAs. Similar effects were previously found in Caco-2 cells exposed to sulfated LCA [3], although this effect was found after exposure to relatively high concentrations of LCA and sulfated LCA (400 and 500 μM), which might hamper the physiological translatability of these results. Here, we present a novel and physiologically relevant in vitro human intestinal model representing a pro-inflammatory state, which can be used to study intestinal barrier function in the presence of intestinal inflammation. We used this model to investigate the effects of sulfated and unsulfated secondary BAs on intestinal barrier function and the immune response in DCs. We show that these BAs had ambiguous effects on intestinal barrier integrity, as reflected by the minor effects on TEER and on the expression of intestinal epithelial integrity-related genes, AMPs and MUC2. Although more research is needed, our results hint towards anti-inflammatory effects of secondary BAs, but not of sulfated secondary BAs, on activated DCs. Future research should focus on the relevance of proper bacterial desulfation activity to assure the anti-inflammatory effects of secondary BAs.

Author contribution statement
Benthe van der Lugt: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Wrote the paper. Maartje

Data availability statement
Data included in article/supplementary material/referenced in article.

Declaration of interests statement
The authors declare no conflict of interest.

Additional information
Supplementary content related to this article has been published online at https://doi.org/10.1016/j.heliyon.2022.e08883.
Blinatumomab as first salvage versus second or later salvage in adults with relapsed/refractory B-cell precursor acute lymphoblastic leukemia: Results of a pooled analysis

Abstract
Background: Blinatumomab is a BiTE® immuno-oncology therapy indicated for the treatment of patients with relapsed or refractory (r/r) B-cell precursor (BCP) acute lymphoblastic leukemia (ALL).
Aims: To assess the efficacy and safety of blinatumomab as first salvage versus second or later salvage in patients with r/r BCP ALL.
Materials & Methods: Patient-level pooled data were used for this analysis. In total, 532 adults with r/r BCP ALL treated with blinatumomab were included (first salvage, n = 165; second or later salvage, n = 367).
Results: Compared with patients who received blinatumomab as second or later salvage, those who received blinatumomab as first salvage had a longer median overall survival (OS; 10.4 vs. 5.7 months; HR, 1.58; 95% CI, 1.26–1.97; P < .001) and relapse-free survival (10.1 vs. 7.3 months; HR, 1.38; 95% CI, 0.98–1.93; P = .061), and higher rates of remission (n = 89 [54%] vs. n = 150 [41%]; odds ratio, 0.59; 95% CI, 0.41–0.85; P = .005), minimal residual disease response (n = 68 [41%] vs. n = 118 [32%]), allogeneic hematopoietic stem cell transplant (alloHSCT) realization (n = 60 [36%] vs. n = 88 [24%]), and alloHSCT in continuous remission (n = 33 [20%] vs. n = 52 [14%]). In a subgroup analysis, there was no apparent effect of prior alloHSCT on median OS in either salvage group. The safety profile of blinatumomab was generally similar between the groups; however, cytokine release syndrome, febrile neutropenia, and infection were more frequent with second or later salvage than with first salvage.
Discussion: In this pooled analysis, the regression analyses indicated greater benefit with blinatumomab as first salvage than as second or later salvage, as evidenced by the longer median OS, longer median RFS, and higher rates of remission.
Conclusion: Overall, blinatumomab was beneficial as first salvage and as second or later salvage, but the effects were more favorable as first salvage.

| INTRODUCTION
Outcomes are poor among patients with relapsed or refractory (r/r) acute lymphoblastic leukemia (ALL). The reported complete remission (CR) rate among patients with r/r B-cell precursor (BCP) ALL is 40% after first salvage, 21% after second salvage, and 11% after third or later salvage. 1 One-year survival rates among patients with r/r BCP ALL are 26% after first salvage, 14% after second salvage, and 12% after third or later salvage. 1 Thus, there is an unmet need for effective salvage therapies in r/r BCP ALL. Blinatumomab is a BiTE® (bispecific T-cell engager) immuno-oncology therapy that directs cytotoxic T cells to lyse CD19-positive B cells. [2][3][4] In two open-label, single-arm, phase 2 studies of blinatumomab in patients with r/r BCP ALL, CR was achieved by 33%-42% of patients and the median overall survival (OS) was 6.1-9.8 months. 5,6 In the randomized, open-label, phase 3 TOWER study in patients (N = 405) with r/r Philadelphia chromosome-negative (Ph−) BCP ALL, salvage treatment with blinatumomab compared with chemotherapy was associated with longer OS (7.7 vs. 4.0 months; hazard ratio [HR], 0.71; 95% confidence interval [CI], 0.55-0.93; P = .01) and a higher CR rate after 12 weeks of treatment (34% vs. 16%; P < .001).
7 Given the ability of blinatumomab to bridge to allogeneic hematopoietic stem cell transplant (alloHSCT) in 24%-67% of responders in these studies, there is potential for the improvement of OS among patients who achieve alloHSCT, particularly in later salvage. This pooled analysis included the two phase 2 studies and the phase 3 TOWER study, and assessed the efficacy and safety of blinatumomab as first salvage or second or later salvage in patients with r/r BCP ALL.

| Patients, study design, and treatment
The patient eligibility criteria, study designs, and treatments in the three studies included in this pooled analysis were described previously. [5][6][7] Patient-level pooled data were used for this analysis. All three studies enrolled adults (aged ≥18 years) with Eastern Cooperative Oncology Group (ECOG) performance status ≤2. In addition, patients in the first phase 2 study (ClinicalTrials.gov, NCT01209286) had BCP ALL relapsed (reappearance of disease after CR lasting ≥28 days) after induction and consolidation or refractory (no CR) after induction and/or consolidation, >5% bone marrow blasts, and life expectancy ≥12 weeks; those with Ph+ ALL eligible for dasatinib or imatinib were excluded. 6 Patients in the second phase 2 study (ClinicalTrials.gov, NCT01466179) had Ph− BCP ALL primary refractory or relapsed (first relapse within 12 months of first remission, relapse within 12 months after alloHSCT, or no response to or relapse after first salvage therapy or beyond) and >10% bone marrow blasts. 5 Patients in the phase 3 study (ClinicalTrials.gov, NCT02013167) had Ph− BCP ALL refractory to primary induction therapy or to salvage with intensive combination chemotherapy, first relapse with the first remission lasting ≤12 months, second or greater relapse, or relapse at any time after alloHSCT, and had >5% bone marrow blasts. 7 Patients received blinatumomab in cycles of 4-week continuous infusion followed by a 2-week treatment-free interval. Two induction cycles and up to three consolidation cycles were administered. Maintenance treatment was given every 12 weeks in the phase 3 study. Eligible patients received alloHSCT at the investigators' discretion. Before each dose of blinatumomab, dexamethasone was given as prophylaxis for cytokine release syndrome (CRS) and neurologic events. All patients provided written, informed consent before enrollment. Institutional review board approval was obtained for each study.

| Assessments
Response was assessed at the end of each treatment cycle.
CR was defined as ≤5% bone marrow blasts and no evidence of disease, and was further defined by the extent of peripheral blood count recovery: CR with full hematologic recovery (platelets >100,000/μL and absolute neutrophil count [ANC] >1000/μL), CR with partial hematologic recovery (CRh; platelets >50,000/μL and ANC >500/μL), or CR with incomplete hematologic recovery (CRi; platelets >100,000/μL or ANC >500/μL). Blast-free hypoplastic or aplastic bone marrow was defined as ≤5% bone marrow blasts, no evidence of disease, and insufficient recovery (platelets ≤50,000/μL and/or ANC ≤500/μL). Partial remission (PR) was defined as bone marrow blasts of 6%-25% with a ≥50% reduction from baseline. Progressive disease was defined as a ≥25% increase from baseline in bone marrow blasts or an absolute increase from baseline in circulating leukemic cells of ≥5000/μL. Relapse was defined as >5% bone marrow or peripheral blood blasts after CR/CRh/CRi. Minimal residual disease (MRD) response was defined as <10⁻⁴ detectable blasts per allele-specific real-time quantitative polymerase chain reaction. 8,9 All adverse events (AEs) were recorded and graded per the National Cancer Institute Common Terminology Criteria for Adverse Events, version 4.0.

| Statistical analyses
Patient incidences of response rates were calculated and accompanied by two-sided 95% exact binomial CIs. Time-to-event estimates were calculated using the Kaplan-Meier method. OS was defined as the time from the first blinatumomab dose to death. Patients alive were censored at the last date known to be alive. Relapse-free survival (RFS) was defined as the time from first CR or CRh within the first two cycles of treatment to hematologic or extramedullary relapse or death. Patients alive in remission were censored at the date of last assessment. OS was compared between patients who received blinatumomab as first salvage and those who received blinatumomab as second or later salvage using an unstratified Cox regression model. CR/CRh rates after two cycles of treatment were compared between patients who received blinatumomab as first salvage and those who received blinatumomab as second or later salvage using an unstratified logistic regression model. P-values <.05 were considered significant.

| Patients
Overall, 532 patients from three clinical studies of blinatumomab in patients with r/r BCP ALL were included in this pooled analysis. 5-7 Among these, 165 received blinatumomab as first salvage and 367 received blinatumomab as second or later salvage. Patients who received blinatumomab as first salvage tended to be older than those who received blinatumomab as second or later salvage (45 vs. 34 years) and had slightly better ECOG performance status (Table 1). Notable proportions of patients at study entry had prior alloHSCT (first salvage, 25%; second or later salvage, 38%) and ≥50% bone marrow blasts (first salvage, 60%; second or later salvage, 68%).

| Median OS and RFS
Patients who received blinatumomab as second or later salvage had significantly shorter OS than patients who received blinatumomab as first salvage (HR, 1.58; 95% CI, 1.26-1.97; P < .001; Figure 1A). The median OS was 5.7 months (95% CI, 4.3-7.1) among those who received blinatumomab as second or later salvage and 10.4 months (95% CI, 8.3-14.3) among those who received blinatumomab as first salvage.
The estimated OS rates among patients who received blinatumomab as second or later salvage compared with first salvage were 29% and 47%, respectively, at 12 months, 19% and 29% at 24 months, and 12% and 23% at 60 months.

| Response and transplant realization
Patients who received blinatumomab as second or later salvage were less likely to achieve CR or CRh after two cycles than those who received blinatumomab as first salvage (odds ratio [OR], 0.59; 95% CI, 0.41-0.85; p = 0.005). CR or CRh after two cycles was achieved by 150 of 367 (41%; 95% CI, 36-46) patients who received blinatumomab as second or later salvage and by 89 of 165 (54%; 95% CI, 46-62) patients who received blinatumomab as first salvage (Table 2). CR was achieved by 101 (28%) patients who received blinatumomab as second or later salvage and 78 (47%) patients who received blinatumomab as first salvage. CRh was achieved by 49 (13%) patients who received blinatumomab as second or later salvage and 11 (7%) patients who received blinatumomab as first salvage. PR was achieved by nine (3%) patients who received blinatumomab as second or later salvage and four (2%) patients who received blinatumomab as first salvage. Overall, MRD response was achieved by 68 (41%; 95% CI, 34-49) patients who received blinatumomab as first salvage and 118 (32%; 95% CI, 27-37) patients who received blinatumomab as second or later salvage (Table 2). Among those with CR or CRh, MRD response was achieved by 63 (71%; 95% CI, 60-80) patients who received blinatumomab as first salvage and 106 (71%; 95% CI, 63-78) patients who received blinatumomab as second or later salvage (Table 2). The rate of MRD response in patients with CR/CRh was not different between those who received blinatumomab as first salvage and those who received blinatumomab as second or later salvage. Sixty (36%) patients who received blinatumomab as first salvage and 88 (24%) patients who received blinatumomab as second or later salvage went on to receive alloHSCT, including 42 (26%) and 61 (17%), respectively, who were in remission after two cycles (Table 2). Thirty-three (20%) patients who received blinatumomab as first salvage and 52 (14%) patients who received blinatumomab as second or later salvage received alloHSCT during remission without additional anticancer therapy. Among patients who went on to receive alloHSCT, there was no apparent difference in median OS between those who received blinatumomab as first salvage and those who received it as second or later salvage (Figure 3).

[Figure 2 caption: Kaplan-Meier estimated relapse-free survival among patients who received blinatumomab as first salvage or second or later salvage. CI, confidence interval; OS, overall survival.]

| Adverse events
The incidence rate of treatment-emergent AEs was consistent between patients who received blinatumomab as first salvage and those who received it as second or later salvage (99% vs. 99%; Table 3). The incidence rate of grade ≥3 treatment-emergent AEs was also similar between patients who received blinatumomab as first salvage and those who received it as second or later salvage (81% vs. 85%). The incidences of the most commonly occurring (in ≥10% of patients) AEs of any grade and their respective grade ≥3 incidences are summarized in Table 3. The proportions of patients with grade ≥3 AEs of interest among those who received blinatumomab as first salvage versus as second or later salvage were as follows: neurologic events (13% vs. 15%), CRS (28% vs. 38%), infection (28% vs. 38%), neutropenia (20% vs. 15%), and febrile neutropenia (18% vs. 24%).
The incidence rate of serious treatment-emergent AEs was somewhat lower among patients who received blinatumomab as first salvage compared with those who received blinatumomab as second or later salvage (60% vs. 66%), as was the frequency of discontinuations due to AEs (11% vs. 20%). The proportion of patients with fatal treatment-emergent AEs was lower among those who received blinatumomab as first salvage compared with second or later salvage (10% vs. 21%; Table 3); however, the proportion of patients with treatment-related fatal AEs was similar (2% vs. 3%). The three treatment-related fatal AEs occurring among patients who received blinatumomab as first salvage were bronchopulmonary aspergillosis, central nervous system infection, and sepsis syndrome. The 10 treatment-related fatal AEs occurring among patients who received blinatumomab as second or later salvage were sepsis, acute

| DISCUSSION
The randomized, open-label phase 3 TOWER study demonstrated significantly longer median OS and higher rates of CR with blinatumomab versus chemotherapy in patients with r/r Ph− BCP ALL. 7 Two prior phase 2 studies also showed efficacy with single-agent blinatumomab in patients with r/r BCP ALL. 5,6 In this pooled analysis (N = 532) of the two phase 2 studies and the TOWER study, blinatumomab was effective as first salvage and as second or later salvage. Notably, the regression analyses indicated greater benefit with blinatumomab as first salvage than as second or later salvage, as evidenced by the longer median OS (10.4 vs. 5.7 months; HR, 1.58; p < 0.001), longer median RFS (10.1 vs. 7.3 months; HR, 1.38; p = 0.061), and higher rates of remission (54% vs. 41%; OR, 0.59; p = 0.005). Other studies have also shown better outcomes in patients who received blinatumomab as first salvage compared with those who received blinatumomab as second or later salvage. 1,10,11 Disease and patient characteristics have a considerable impact on the response to treatment and outcome in patients with r/r BCP ALL. A large proportion (92%) of patients included in this pooled analysis was required to be either refractory or to have disease relapse within 1 year of first remission. In multivariate analyses, poor disease status at the time of salvage (e.g., refractory with prior transplant) and relapse within the first year of CR have been associated with shorter OS. 1,12 Notable proportions of patients in this analysis had received prior alloHSCT (first salvage, 25%; second or later salvage, 38%) or had ≥50% bone marrow blasts (first salvage, 60%; second or later salvage, 68%). Prior alloHSCT and higher levels of bone marrow blasts or white blood cells have each been associated with a shorter OS in patients with r/r BCP ALL. 1,[12][13][14] However, in the subgroup analysis presented here, there was no apparent effect of prior alloHSCT on median OS among patients who received blinatumomab as first salvage or as second or later salvage. MRD response is a predictor of outcomes in BCP ALL. 15,16 Achievement of MRD response with first salvage, but not with second salvage, has been associated with longer OS and event-free survival in patients with r/r BCP ALL. 16 In this analysis, an MRD response occurred in 71% of patients with CR or CRh who received blinatumomab as first salvage or as second or later salvage, further indicating the potential for blinatumomab efficacy in later salvage in patients with r/r BCP ALL.
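The remission odds ratio cited in this discussion (OR, 0.59) can be recovered directly from the pooled counts in the Results (89/165 CR/CRh with first salvage vs. 150/367 with second or later salvage). Below is a minimal sketch in Python, not the study's analysis code: it computes the unadjusted odds ratio with a Wald 95% CI on the log-odds scale; the published estimate came from an unstratified logistic regression model, which coincides with this calculation for a single 2x2 table.

# Minimal sketch: unadjusted odds ratio for CR/CRh (second or later salvage
# vs. first salvage) from the pooled counts in the Results, with a Wald
# 95% CI on the log-odds scale.
import math

def odds_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """OR of group A vs. group B with a Wald confidence interval."""
    a, b = events_a, n_a - events_a   # group A: responders / non-responders
    c, d = events_b, n_b - events_b   # group B: responders / non-responders
    or_hat = (a / b) / (c / d)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_hat) - z * se_log_or)
    hi = math.exp(math.log(or_hat) + z * se_log_or)
    return or_hat, lo, hi

# CR/CRh after two cycles: 150/367 (second or later salvage), 89/165 (first).
or_hat, lo, hi = odds_ratio_ci(150, 367, 89, 165)
print(f"OR = {or_hat:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # OR = 0.59 (0.41-0.85)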
Inducing a remission followed by HSCT is the primary goal of salvage therapy in patients with r/r Ph− BCP ALL. 17 In this analysis, 36% of patients who received blinatumomab as first salvage and 24% of patients who received blinatumomab as second or later salvage subsequently received alloHSCT, including 20% and 14%, respectively, who were in continuous remission. In comparison, a retrospective analysis of study groups and centers in Europe and the United States found that 28% of patients with r/r Ph− BCP ALL received HSCT after first salvage, 49% of whom were in CR at the time of transplant. 1 The alloHSCT realization rates in this analysis are encouraging, particularly given the advanced disease of this patient population, and indicate that blinatumomab is effective at bridging to transplant both as first salvage and as second or later salvage.

[Table 3 caption: Summary of adverse events.]

The safety profile of blinatumomab was generally similar among patients treated with blinatumomab as first salvage and those treated as second or later salvage in this pooled analysis. However, the incidence rate of serious treatment-emergent AEs was slightly higher among patients treated in second or later salvage compared with first salvage (66% vs. 60%), as was the frequency of treatment-emergent fatal AEs (21% vs. 10%). Notably, however, there was no appreciable difference between the groups in the proportion of treatment-related fatal AEs (first salvage, 2%; second or later salvage, 3%). Certain AEs of interest were more common among patients who received blinatumomab as second or later salvage compared with those who received blinatumomab as first salvage (CRS, infection, and febrile neutropenia), whereas others were not (neurologic events and neutropenia). These differences are not surprising given that patients receiving later salvage often have more advanced disease, a poorer prognosis, and poorer performance status than patients receiving earlier lines of therapy. The occurrence of neurologic events and CRS does not preclude treatment with blinatumomab, since these were managed successfully with dexamethasone and treatment interruption in the studies included in this analysis [5][6][7] and in other studies of blinatumomab. [18][19][20] There are a few limitations of this pooled analysis that should be considered. First, because of the design of the studies included, there was an imbalance in the number of patients who received blinatumomab as first salvage (n = 165) compared with those who received blinatumomab as second or later salvage therapy (n = 367). Second, patients with Ph+ ALL were not excluded from enrollment in the first phase 2 study 6; however, only two patients overall in the analysis had Ph+ BCP ALL. Finally, the impact of prior inotuzumab ozogamicin (INO), an anti-CD22 monoclonal antibody-calicheamicin conjugate, was not evaluated in this study, as INO was approved for the treatment of adults with r/r BCP ALL after the studies reported here. 21 Clinical trials evaluating the sequencing and combination of blinatumomab with INO are ongoing (NIH–National Cancer Institute 22: https://www.cancer.gov/about-cancer/treatment/clinical-trials/intervention/inotuzumab-ozogamicin). In conclusion, although blinatumomab as first salvage and as second or later salvage induced remission, bridged to HSCT, and showed benefits in median OS and RFS in this population of patients with r/r BCP ALL, the greatest benefit was for blinatumomab as first salvage.
ACCESSIBILITY
Qualified researchers may request data from Amgen clinical trials. Complete details are available at http://www.amgen.com/datasharing.